Re: [squid-users] Skype for Business behind a transparent squid (TProxy) HTTP/S

2016-12-06 Thread Pieter De Wit
If that is the edge server then it will be the audio/video

Sent from my iPhone

> On 6/12/2016, at 12:35, Amos Jeffries  wrote:
> 
>> On 6/12/2016 11:46 a.m., Sameh Onaissi wrote:
>> 
>> I have a Ubuntu 16.04 server with Squid 3.5.22 installed. It acts as a 
>> gateway in a LAN.
>> 
>> It is configured to intercept HTTP and HTTPS traffic (Transparent). So 
>> iptables redirects were used for ports 80 and 443.
>> The server runs two scripts:
>> _*nat.sh*_ to bridge the two network cards, allowing LAN computers access to
>> the internet through the server's Internet interface card.
>> *_iptables.sh_* which defines the ip rules and port forwarding: 
>> http://pastebin.com/SqpbmYQQ
>> 
>> BEFORE RUNNING iptables.sh...
>> 
>> When I connect a LAN computer to it, everything works as expected. Complete 
>> Internet access with some HTTP and HTTPS domains blocked/redirected to 
>> another page. Skype for Business logs in successfully.
>> 
>> AFTER RUNNING iptables.sh
>> Skype for Business disconnects and fails to re-connect; normal Skype works
>> just fine.
>> 
>> 
>> I reviewed:
>> https://support.office.com/en-us/article/Create-DNS-records-at-eNomCentral-for-Office-365-a6626053-a9c8-445b-81ee-eeb6672fae77?ui=en-US=en-US=US#bkmk_verify
>> and added all the DNS configurations on eNom.
>> 
>> That got rid of the DNS error I was getting, but led to another error saying
>> the service is temporarily unavailable.
>> 
>> Any suggestions as to why this is happening? Any solutions?
> 
> Skype is sending something that is not HTTPS over port 443. The 
> on_unsupported_protocol feature in Squid-4 is needed to tunnel Skype traffic 
> when intercepting port 443.
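A minimal sketch of the Squid-4 directive Amos refers to, in squid.conf (the `all` ACL simply applies it to every connection; adapt to taste):

```
# Squid 4.x: when a client on an intercepted port speaks something that
# is not recognisable HTTP/TLS, tunnel it blindly instead of rejecting it.
on_unsupported_protocol tunnel all
```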
> 
>> 
>> *Note:* both router and Ubuntu's WAN interface use Google's 8.8.8.8 DNS
>> 
> 
> I hope that means the border router is providing DNS recursive lookup with 
> 8.8.8.8 as the parent, with LAN devices using that border router as their DNS 
> server. That will minimize the damage Google is causing, but not avoid it 
> completely. If not you should make it so, or at least place another shared 
> resolver somewhere to do the necessary DNS caching.
> 
> 
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
> 



[squid-users] Testing - please ignore

2015-03-24 Thread Pieter De Wit
 



Re: [squid-users] Cache Chrome updates

2014-04-14 Thread Pieter De Wit

On 14/04/2014 19:32, Jasper Van Der Westhuizen wrote:

Hi all

I'm trying to cache chrome updates, but I see it always fetches over and
over again.

I have the following refresh pattern in my config.

refresh_pattern -i pack.google.com/.*\.(exe|crx) 10080 80% 43200
override-expire override-lastmod ignore-no-cache  ignore-reload
reload-into-ims ignore-private

I see the following behavior in my logs. This is for the same
client(source). Multiple entries, like it gets downloaded over and over
again.
Logs:

1397459574.511    199 xxx.xxx.xxx.xxx TCP_MISS/302 1400 GET
http://cache.pack.google.com/edgedl/chrome/win/34.0.1847.116_33.0.1750.154_chrome_updater.exe
 - DEFAULT_PARENT/xxx.xxx.xxx.xxx text/html
1397459579.924   4794 xxx.xxx.xxx.xxx TCP_MISS/206 141330 GET
http://r2---sn-pn-woce.c.pack.google.com/edgedl/chrome/win/34.0.1847.116_33.0.1750.154_chrome_updater.exe?
 - DEFAULT_PARENT/xxx.xxx.xxx.xxx application/x-msdos-program
1397459591.067    548 xxx.xxx.xxx.xxx TCP_MISS/302 1400 GET
http://cache.pack.google.com/edgedl/chrome/win/34.0.1847.116_33.0.1750.154_chrome_updater.exe
 - DEFAULT_PARENT/xxx.xxx.xxx.xxx text/html
1397459596.709   4917 xxx.xxx.xxx.xxx TCP_MISS/206 283744 GET
http://r2---sn-pn-woce.c.pack.google.com/edgedl/chrome/win/34.0.1847.116_33.0.1750.154_chrome_updater.exe?
 - DEFAULT_PARENT/xxx.xxx.xxx.xxx
application/x-msdos-program

Is my refresh pattern incorrect?


Good day, Jasper :)

Should it not read *pack.google
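If the intent is to also catch the `c.pack.google.com` mirror hosts seen in the logs, a broadened pattern might look like this (a sketch, not a tested config):

```
refresh_pattern -i \.pack\.google\.com/.*\.(exe|crx) 10080 80% 43200 override-expire ignore-reload reload-into-ims
```

Note also that the TCP_MISS/206 entries are ranged replies, which Squid will not cache as-is; `range_offset_limit` can force full-object fetches, at the cost of extra bandwidth.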

Cheers,

Pieter


Re: [squid-users] Blank page on first load

2014-04-07 Thread Pieter De Wit



My setup is 3 servers running squid 3-3.1.12-8.12.1 behind an F5 load
balancer. From there I send all traffic to a ZScaler cache peer. In my
testing I have bypassed the cache peer but without any success.

Has anyone come across this problem before?




Hi Jasper,

Have you tried bypassing the F5s? They try to do a bunch of clever 
things, and this can mess with normal networking/caching.


Cheers,

Pieter


Re: [squid-users] what is best method to connect two squid servers on the same router?

2013-05-17 Thread Pieter De Wit

Let's try this again


While you are busy with the deb packages, how about not putting in a
squid.conf and rather calling it squid.conf.default, or do include
configs like Apache ? Pretty please ? :)


I'm not sure I understand the first suggestion there about squid.conf
and squid.conf.default?

I've tried to convince people to follow the Apache config include
style. But it gets really nasty to manage related directive ordering.
Or did you mean something else entirely?

Amos

Hi Amos,

Instead of the package containing squid.conf, make it contain
squid.conf.default or squid.conf.example

Nope - you are spot on, I meant the Apache config include style :) What
if you use 00_ 01_ 02_ etc ?

Cheers,

Pieter





Re: [squid-users] what is best method to connect two squid servers on the same router?

2013-05-13 Thread Pieter De Wit

On 13/05/2013 11:34, Amos Jeffries wrote:

On 13/05/2013 2:26 a.m., Fix Nichols wrote:
Heh, if you are running Debian and lazy, you could 'apt-get install 
squid -y ; apt-get install squid3 -y'. You'd have squid 2.7 and squid3 
both installed.

And that won't work for much longer. We are in the process of replacing
squid with a transitional package to squid3.
But I know, that's just being lazy. You can install two squids; just 
change the name and location of your binaries on one of them, and its 
cache directories as well. Assuming squid is resident on a PC and 
not a router, that is. It should be pretty straightforward.

Or do it properly and install Squid once. Just start it twice with two
squid.conf files containing different settings. Ta-Dah!
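A sketch of what "start it twice" can look like; paths and ports below are illustrative, and each instance needs at least its own port, cache, PID file and logs:

```
# second instance: /etc/squid/squid-b.conf must differ at least in
http_port 3129
pid_filename /var/run/squid-b.pid
cache_dir aufs /var/spool/squid-b 5000 16 256
access_log /var/log/squid/access-b.log

# then initialise and start each instance against its own config:
#   squid -f /etc/squid/squid.conf -z   && squid -f /etc/squid/squid.conf
#   squid -f /etc/squid/squid-b.conf -z && squid -f /etc/squid/squid-b.conf
```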


If, I'm understanding the original poster right though it sounds like
traffic is leaving the Squid and being diverted back into them in a
forwarding loop. Or that the traffic flows are getting mixed up somehow
in other ways.

Amos


While you are busy with the deb packages, how about not putting in a 
squid.conf and rather calling it squid.conf.default, or do include 
configs like Apache ? Pretty please ? :)


Cheers,

Pieter





Re: [squid-users] Redirect Youtube out second ISP

2013-02-20 Thread Pieter De Wit

Hi,

I would just run 2 squids on the same box, iptables mark the second 
one's traffic for the second uplink (using multiple routing tables etc). 
The first squid then simply forwards all youtube traffic by URL - no IP 
issues etc.


Cheers,

Pieter

On 21/02/2013 05:33, Ricardo Rios wrote:
No it is not; it is just what I see doing some sniffing on my MikroTik box, 
where my customers connect. I am sure I am still missing a few IPs.


Regards


I am doing it this way currently on my router however knowing all of
youtube's IP addresses is annoying. Do you know if your list is
conclusive?

Ryan Stinn
Holy Trinity Catholic School Division

-Original Message-
From: Ricardo Rios [mailto:shorew...@malargue.gov.ar]
Sent: Monday, February 18, 2013 4:46 PM
To: Squid Users
Subject: Re: [squid-users] Redirect Youtube out second ISP

I have that working but using the www.shorewall.net [1] Firewall, sending
all youtube requests to provider number 4:

/etc/shorewall/providers

#NAME NUMBER MARK DUPLICATE INTERFACE GATEWAY
OPTIONS COPY

cable2 2 2 main eth4:192.168.150.99 192.168.150.199
track,balance=3,loose,mtu=1492
cable3 3 3 main eth4:192.168.150.99 192.168.150.202
track,balance=3,loose,mtu=1492
silica 4 4 main eth6 186.0.190.241 track,balance=2,mtu=1500

/etc/shorewall/tcrules

#MARK SOURCE DEST PROTO DEST SOURCE USER TEST LENGTH TOS CONNBYTES 
HELPER


#Youtube
4:P 10.0.0.0/24 208.117.253.0/20
4:P 10.0.0.0/24 74.125.228.0/24
4:P 10.0.0.0/24 173.194.60.0/18
4:P 10.0.0.0/24 200.9.157.0/20

http://www.shorewall.net/Documentation_Index.html [2]Regards


- Original Message -

From: Stinn, Ryan ryan.st...@htcsd.ca
To: squid-users@squid-cache.org squid-users@squid-cache.org
Cc:
Sent: Saturday, 16 February 2013 4:13 AM
Subject: [squid-users] Redirect Youtube out second ISP

I'm wondering if it's possible to use squid to redirect youtube out a
second ISP line. We have two connections and I'd like to push all
youtube out the second connection.

Try this:

acl yt dstdom_regex -i youtube
tcp_outgoing_address 1.2.3.4 yt

1.2.3.4 is the IP address of the 2nd line (it should be on the same
machine as squid).

Amm.




Links:
--
[1] http://www.shorewall.net
[2] http://www.shorewall.net/Documentation_Index.html




Re: [squid-users] Redirect Youtube out second ISP

2013-02-15 Thread Pieter De Wit

On 16/02/2013 11:43, Stinn, Ryan wrote:

I'm wondering if it's possible to use squid to redirect youtube out a second 
ISP line. We have two connections and I'd like to push all youtube out the 
second connection.
I was thinking I could put a second squid proxy on that line and then redirect 
all youtube traffic to it, but I'm not sure how to start this.

Thanks

Ryan



Hi,

Look at the cache_peer_access option if you have the second server. You 
could also use a dual gateway option, but this needs some work on 
iptables/iproute.


Cheers,

Pieter


Re: [squid-users] better configuration for this server

2012-05-07 Thread Pieter De Wit

On 7/05/2012 23:38, Mário Sérgio Candian wrote:

Hi guys.

I have a server with this configuration:

Intel xeon E5-2670 with 8 cores, 2.60GHz (3.30 GHz with Turbo Boost), 20MB
of cache, QPI Link 8GT/s;
Hyper-threading, 1600MHz TDP of 115 Watts;
32GB ( 4 x 8GB ) RAM DDR3-1333MHz, Dual rank x4;
RAID 5;
3 HDs SAS 6Gbps of 300GB, 10k RPM, Hot Plug;
CacheCade 200GB SSD SAS

Is this enough to support 7000 simultaneous users? Is it enough for
15000 users?

What's the best configuration of squid for a server like this? What value
can I use in cache_mem? For the cache_dir option, is it advisable to use
diskd or aufs?

And the other options:

cache_swap_low ??
cache_swap_high ??
cache_replacement_policy ??
memory_replacement_policy ??

I'd like a configuration for maximum performance. Can someone help me?

Regards,
MSC


Hi,

What config have you tried so far? What OS do you plan to run on here? 
Squid can support millions of users if they do almost nothing, or as 
few as hundreds if they are busy. What type of traffic do you see going 
via this box?


Off the bat I can tell you now to give up on RAID-5; it's not worth 
it. What network cards do you have in this machine? If this is a cache 
for an ISP (guessing by the numbers) you will need gigabit and more.


Cheers,

Pieter


Re: RES: [squid-users] better configuration for this server

2012-05-07 Thread Pieter De Wit

On 8/05/2012 00:56, Mário Sérgio Candian wrote:

Hi Peter.

Thanks for the answer.

I'd like to run FreeBSD on this server. I haven't tried any config yet. I'll
buy this server, but I need to know whether it supports the number of users
that I have.

Yes, this will be a cache for my ISP. The server has a gigabit network card.
I have a link of 400Mbps.

Can this server handle this number of users, i.e. 15000 users? About the
squid configuration, what do you recommend?

Regards,
MSC



Hi,

I would buy more cache drives (assuming you are going to be using the SSD 
for this). Change the 300 GB drives to 146 GB drives to save some money; 
heck, you can even change them to a mirror set of 72 GB, it's only the 
OS. I wouldn't mess with the default settings, since Squid is pretty well 
tuned as it is. I have run 6000 connections through boxes smaller than 
that, way smaller. The only time I would change those settings is when you 
need to force more caching out of it.


I would also research the disk-to-memory caching formula on the Squid 
wiki (work out how much memory X GB of on-disk cache needs). More than 
this I can't offer without more input from you; otherwise I might as 
well deploy the box.
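The wiki formula mentioned above can be sketched as a quick calculation. The 14 MB-per-GB figure used here is the commonly quoted rule of thumb for Squid's in-memory index, an assumption rather than a guarantee:

```python
def index_ram_mb(disk_cache_gb, mb_per_gb=14.0):
    """Estimate RAM needed for Squid's in-memory cache index.

    mb_per_gb is the often-quoted rule of thumb (~14 MB of RAM per GB
    of on-disk cache); treat it as an assumption, not a guarantee.
    """
    return disk_cache_gb * mb_per_gb

# A 200 GB SSD cache_dir would need roughly 2800 MB of RAM for the
# index alone, before cache_mem and per-connection overhead.
print(index_ram_mb(200))  # -> 2800.0
```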


Cheers,

Pieter


[squid-users] Adding of cache_dir to running squid

2012-04-16 Thread Pieter De Wit

Hi All,

How do I get squid to init and use a new cache_dir without restarting ?

We have run out of caching space and I would like to use some newly 
allocated space :)


squid -z -- Says it's already running
squid -k reconfigure -- Adds squid.swap file, no directories etc

# /usr/sbin/squid -v
Squid Cache: Version 3.1.8

Cheers,

Pieter


[squid-users] Weird bytes size for CONNECT

2012-03-26 Thread Pieter De Wit

Hi All,

I am currently using Squid to proxy a web-based training product and have 
noticed that the content length seems to be 0 (yes, the example shows 
bytes, but it was way more than that)


1332822131.951 311020 127.0.0.1 TCP_MISS/200 8470 CONNECT 
216.115.208.199:443 - DIRECT/216.115.208.199 -


The server is showing a connection flow of around 40k/sec. I am not sure 
if this is a control connection or something along those lines. I will see 
when the training is completed, but I was wondering if someone else has 
seen this?


Cheers,

Pieter


Re: [squid-users] please help with port 110 and 25 on squid box

2012-02-21 Thread Pieter De Wit

On 22/02/2012 03:21, Muhammad Yousuf Khan wrote:

I am using Debian Lenny with squid version 2.7.STABLE3.

I have a squid box and also want to allow NAT on ports 25 and 110 for
email send/receive, along with the squid service.

Please help me.

Thanks


Hi Muhammad,

This is beyond the scope of the squid mailing list. I would suggest that 
you read up on iptables and NAT support. Google for an iptables NAT howto.


Cheers,

Pieter


Re: [squid-users] Caching in Afghanistan

2012-02-18 Thread Pieter De Wit

On 18/02/2012 23:56, jbrodi...@gci.net wrote:
Hello there everyone. I'm currently deployed to Afghanistan and have 
recently set up a VSAT connection with approximately 18 users at peak. 
Not a large number of users, however in our remote location a simple 
opening of a page with full user activity can slow things down to a 
near halt. I've been trying to do research on a caching server that 
would cache web images so that commonly opened websites would use LAN 
bandwidth rather than the VSAT bandwidth. I'll list the following 
setup I have for the 3 living quarter buildings. I'm not sure what 
exactly I need to do hardware/software wise, but I was recommended to 
check this service out, so hopefully you guys can let me know if Squid 
is exactly what I'm looking for:





- BLDG 43: Router, DHCP Disabled, 7 Users
  VSAT Hardware - x3 iDirect Router Modem - Cisco PoE 48-Port Switch
- BLDG 42: Direct from Switch, 6 to 7 Users
- BLDG 41: Unmanaged Switch, 5 Users


The VSAT - Router Modem connection is connected by a Rx and Tx 
Coaxial line (RG6). All other connections are Cat5e.


I really do appreciate any help that is given. Thank you in advance

-SGT B.

Hi,

I would say Squid will help a lot. I would put down a full Linux box, 
put bind and dhcpd (for 18 people this might be overkill) and Squid on 
there. I would set the box up in transparent proxy mode, and I would put 
the bind server in caching mode.


Hardware wise, you won't need much. Take what you can get your hands on. 
More memory and fast disks (and spindles over space) are good. Given the 
size/speed of the link and the number of users, a desktop type PC will 
even do the trick.


Hope that helps !

Pieter


Re: [squid-users] Download cap using squid in linux.

2012-02-09 Thread Pieter De Wit

On 9/02/2012 23:48, Vivek Sharma wrote:

Is there a way we can do the following things using squid?

1. Put an upper cap on the total download size in a month per user (users 
are configured in LDAP).

2. Put an upper cap on the number of hours of usage per month per user.

I shall be obliged if someone can tell me an alternate solution if this is 
not there by default in squid.


Thanks in anticipation.

regards,
Vivek


Hi Vivek,

I do believe you are looking for:

http://wiki.squid-cache.org/Features/Quota

Note in the first section what is said about the current way to do it.
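One common route for per-user limits (not necessarily what that wiki page prescribes) is an external_acl_type helper. Below is a sketch; the helper name, the 10 GB limit, and the usage store are all made up, and a real deployment would read totals aggregated from access.log or a database:

```python
#!/usr/bin/env python3
"""Sketch of a Squid external_acl_type helper enforcing a monthly
byte quota per user. squid.conf would wire it in with something like:

  external_acl_type quota ttl=60 %LOGIN /usr/local/bin/quota_check.py
  acl over_quota external quota
  http_access deny over_quota
"""
import sys

MONTHLY_LIMIT_BYTES = 10 * 1024**3  # assumed 10 GB per user

# Placeholder data: a real helper would read per-user totals fed by a
# log daemon or periodic access.log aggregation.
FAKE_USAGE = {"alice": 11 * 1024**3, "bob": 1 * 1024**3}

def bytes_used(user):
    return FAKE_USAGE.get(user, 0)

def verdict(user):
    # "OK" means the ACL matches, i.e. this user IS over quota.
    return "OK" if bytes_used(user) >= MONTHLY_LIMIT_BYTES else "ERR"

def main():
    for line in sys.stdin:                # one lookup per line from Squid
        print(verdict(line.strip()), flush=True)  # unbuffered reply

if __name__ == "__main__":
    main()
```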

Cheers,

Pieter


Re: [squid-users] Question about Parent Cache

2012-02-07 Thread Pieter De Wit

On 8/02/2012 20:13, someone wrote:

Ok, I'm running squid 3.1.6 on Debian, and I have my proxy configured for a
parent cache that's upstream. Well, what I have discovered is that my
local squid NEVER serves from cache when it's configured for a parent cache;
the only advantage I'm getting is from the upstream squid, which serves from
cache fine. So, my point is, it's pretty much pointless to have a squid cache
down here configured for a parent cache, when the parent cache is the
only one serving up cache.

Sort of makes the whole deal pointless. Might as well just eliminate my
local cache and configure my firewall rules to just use the upstream
cache. Anything I'm missing here?


Hi,

Using tcpflow or wireshark, can you capture the headers of the packets 
coming back from your upstream ?


It might also pay to send the config lines from your squid.conf to the list.

Cheers,

Pieter


Re: [squid-users] forward loop

2012-02-04 Thread Pieter De Wit

Hi,

Do you have a proxy set in the client to 192.168.40.2 port 3128 ? If so, 
that is your problem. Also, check if the re-direction rule (on your 
firewall) is excluding the outbound connection made by squid.


You could be ending up with the squid server's port 80 connection 
getting looped back to itself.


What is doing the redirection ? If it is iptables, can you paste the 
relevant sections of iptables ?


Cheers,

Pieter

On 4/02/2012 20:02, Mustafa Raji wrote:

hi Pieter
this is my configuration file,

#define access list for network
acl my_network src 192.168.12.0/24
acl my_network src 192.168.7.0/24
acl my_network src 192.168.40.0/24
acl my_network src 10.10.10.0/24

#allow http access for the network
http_access allow my_network

#squid default acl configuration
acl all src all
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 21
acl Safe_ports port 443
acl Safe_ports port 70
acl Safe_ports port 210
acl Safe_ports port 1025-65535
acl Safe_ports port 280
acl Safe_ports port 488
acl Safe_ports port 591
acl Safe_ports port 777
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny all
http_port 3128 intercept
http_port 8080

#cache configuration
#define core dump directory
visible_hostname squidtest
coredump_dir /var/coredump

#define cache replacement policy
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA

#define cache memory
cache_mem 512 MB

#define squid log files
access_log /var/log/squid3/access.log
emulate_httpd_log off
cache_store_log none

#include /etc/squid3/refresh.conf
cache_log /var/log/squid3/cache.log

#define cache directory
cache_dir aufs /var/squid/aufs1 5000 16 256
cache_dir aufs /var/squid/aufs2 5000 16 256
cache_dir aufs /var/squid/aufs3 5000 16 256


maximum_object_size 512 MB


ipcache_size 5120

cache_swap_low 85
cache_swap_high 95

cache_mgr mustafa.r...@yahoo.com
cachemgr_passwd x all

thank you with my best regards


--- On Thu, 2/2/12, Pieter De Wit pie...@insync.za.net wrote:


From: Pieter De Wit pie...@insync.za.net
Subject: Re: [squid-users] forward loop
To: squid-users@squid-cache.org
Date: Thursday, February 2, 2012, 10:08 AM
Hi Mustafa,

Can you please post your squid.conf ? (Remove all comments
and passwords
etc)

Cheers,

Pieter

On 2/02/2012 23:04, Mustafa Raji wrote:

hi
please, i have a forward loop warning in my cache.log

what is the cause of it?

i checked the internet and found that the cause is using a peer squid
configuration where the two cache servers have the same
visible_hostname, but i never used a peer in my
configuration; i have one cache server with an intercept
configuration. please can you tell me what causes the
cache forward loop? the warning message is from cache.log

2012/02/02 12:02:23| WARNING: Forwarding loop detected for:
POST /2.0/blugro2relay.groove.microsoft.com/n7hngumkwg46fvvc2zuwzzcd6y43i3da4bnpuss,ConnType=KeepAlive HTTP/1.1
Accept: */*
Content-Type: application/octet-stream
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Win32)
UserAgent: blugro2relay.groove.microsoft.com
Content-Length: 22
Pragma: no-cache
Expires: 0
Host: 192.168.40.2:3128
Via: 1.0 squidtest (squid/3.1.11), 1.1 squidtest (squid/3.1.11), 1.1 squidtest (squid/3.1.11)
X-Forwarded-For: 192.168.40.1, 192.168.40.2, 192.168.40.2
Cache-Control: no-cache, max-age=0
Connection: keep-alive

and this error continues to appear with increasing values of Via and
X-Forwarded-For

my access.log file shows this information at the same time as the loop

the ip 192.168.40.2 is the CacheServer ip

Thu Feb  2 12:02:23 2012      0 192.168.40.1 TCP_IMS_HIT/304 287 GET http://crl.microsoft.com/pki/crl/products/WinPCA.crl - NONE/- application/pkix-crl
Thu Feb  2 12:02:24 2012    898 192.168.40.1 TCP_MISS/400 237 POST http://65.55.122.232/ - DIRECT/65.55.122.232 -
Thu Feb  2 12:02:24 2012      8 192.168.40.2 NONE/400 69168 NONE error:request-too-large - NONE/- text/html
Thu Feb  2 12:02:24 2012     19 192.168.40.2 TCP_MISS/400 69275 POST http://192.168.40.2:3128/2.0/blugro2relay.groove.microsoft.com/n7hngumkwg46fvvc2zuwzzcd6y43i3da4bn$
Thu Feb  2 12:02:24 2012     23 192.168.40.2 TCP_MISS/400 69377 POST http://192.168.40.2:3128/2.0/blugro2relay.groove.microsoft.com/n7hngumkwg46fvvc2zuwzzcd6y43i3da4bn$
Thu Feb  2 12:02:24 2012     26 192.168.40.2 TCP_MISS/400 69479 POST http://192.168.40.2:3128/2.0/blugro2relay.groove.microsoft.com/n7hngumkwg46fvvc2zuwzzcd6y43i3da4bn$
Thu Feb  2 12:02:24 2012     30 192.168.40.2 TCP_MISS/400 69581 POST http://192.168.40.2:3128/2.0/blugro2relay.groove.microsoft.com/n7hngumkwg46fvvc2zuwzzcd6y43i3da4bn$
Thu Feb  2 12:02:24 2012     34 192.168.40.2 TCP_MISS/400 69683 POST http://192.168.40.2:3128/2.0/blugro2relay.groove.microsoft.com/n7hngumkwg46fvvc2zuwzzcd6y43i3da4bn$
Thu Feb  2 12:02:24 2012     37 192.168.40.2 TCP_MISS/400 69785 POST
Re: [squid-users] forward loop

2012-02-02 Thread Pieter De Wit

Hi Mustafa,

Can you please post your squid.conf ? (Remove all comments and passwords 
etc)


Cheers,

Pieter

On 2/02/2012 23:04, Mustafa Raji wrote:

hi
please, i have a forward loop warning in my cache.log. what is the cause of it?
i checked the internet and found that the cause is using a peer squid
configuration where the two cache servers have the same visible_hostname, but
i never used a peer in my configuration; i have one cache server with an
intercept configuration. please can you tell me what causes the cache
forward loop? the warning message is from cache.log

2012/02/02 12:02:23| WARNING: Forwarding loop detected for:
POST 
/2.0/blugro2relay.groove.microsoft.com/n7hngumkwg46fvvc2zuwzzcd6y43i3da4bnpuss,ConnType=KeepAlive
 HTTP/1.1
Accept: */*
Content-Type: application/octet-stream
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Win32)
UserAgent: blugro2relay.groove.microsoft.com
Content-Length: 22
Pragma: no-cache
Expires: 0
Host: 192.168.40.2:3128
Via: 1.0 squidtest (squid/3.1.11), 1.1 squidtest (squid/3.1.11), 1.1 squidtest 
(squid/3.1.11)
X-Forwarded-For: 192.168.40.1, 192.168.40.2, 192.168.40.2
Cache-Control: no-cache, max-age=0
Connection: keep-alive

and this error continues to appear with increasing values of Via and
X-Forwarded-For.
my access.log file shows this information at the same time as the loop.
the ip 192.168.40.2 is the CacheServer ip

Thu Feb  2 12:02:23 2012  0 192.168.40.1 TCP_IMS_HIT/304 287 GET 
http://crl.microsoft.com/pki/crl/products/WinPCA.crl - NONE/- 
application/pkix-crl
Thu Feb  2 12:02:24 2012898 192.168.40.1 TCP_MISS/400 237 POST 
http://65.55.122.232/ - DIRECT/65.55.122.232 -
Thu Feb  2 12:02:24 2012  8 192.168.40.2 NONE/400 69168 NONE 
error:request-too-large - NONE/- text/html
Thu Feb  2 12:02:24 2012 19 192.168.40.2 TCP_MISS/400 69275 POST 
http://192.168.40.2:3128/2.0/blugro2relay.groove.microsoft.com/n7hngumkwg46fvvc2zuwzzcd6y43i3da4bn$
Thu Feb  2 12:02:24 2012 23 192.168.40.2 TCP_MISS/400 69377 POST 
http://192.168.40.2:3128/2.0/blugro2relay.groove.microsoft.com/n7hngumkwg46fvvc2zuwzzcd6y43i3da4bn$
Thu Feb  2 12:02:24 2012 26 192.168.40.2 TCP_MISS/400 69479 POST 
http://192.168.40.2:3128/2.0/blugro2relay.groove.microsoft.com/n7hngumkwg46fvvc2zuwzzcd6y43i3da4bn$
Thu Feb  2 12:02:24 2012 30 192.168.40.2 TCP_MISS/400 69581 POST 
http://192.168.40.2:3128/2.0/blugro2relay.groove.microsoft.com/n7hngumkwg46fvvc2zuwzzcd6y43i3da4bn$
Thu Feb  2 12:02:24 2012 34 192.168.40.2 TCP_MISS/400 69683 POST 
http://192.168.40.2:3128/2.0/blugro2relay.groove.microsoft.com/n7hngumkwg46fvvc2zuwzzcd6y43i3da4bn$
Thu Feb  2 12:02:24 2012 37 192.168.40.2 TCP_MISS/400 69785 POST 
http://192.168.40.2:3128/2.0/blugro2relay.groove.microsoft.com/n7hngumkwg46fvvc2zuwzzcd6y43i3da4bn$
Thu Feb  2 12:02:24 2012 41 192.168.40.2 TCP_MISS/400 69887 POST 
http://192.168.40.2:3128/2.0/blugro2relay.groove.microsoft.com/n7hngumkwg46fvvc2zuwzzcd6y43i3da4bn$
Thu Feb  2 12:02:24 2012 44 192.168.40.2 TCP_MISS/400 69989 POST 
http://192.168.40.2:3128/2.0/blugro2relay.groove.microsoft.com/n7hngumkwg46fvvc2zuwzzcd6y43i3da4bn$
Thu Feb  2 12:02:24 2012 48 192.168.40.2 TCP_MISS/400 70091 POST 
http://192.168.40.2:3128/2.0/blugro2relay.groove.microsoft.com/n7hngumkwg46fvvc2zuwzzcd6y43i3da4bn$
Thu Feb  2 12:02:24 2012 51 192.168.40.2 TCP_MISS/400 70193 POST 
http://192.168.40.2:3128/2.0/blugro2relay.groove.microsoft.com/n7hngumkwg46fvvc2zuwzzcd6y43i3da4bn$
Thu Feb  2 12:02:24 2012 55 192.168.40.2 TCP_MISS/400 70295 POST 
http://192.168.40.2:3128/2.0/blugro2relay.groove.microsoft.com/n7hngumkwg46fvvc2zuwzzcd6y43i3da4bn$
Thu Feb  2 12:02:24 2012 58 192.168.40.2 TCP_MISS/400 70397 POST 
http://192.168.40.2:3128/2.0/blugro2relay.groove.microsoft.com/n7hngumkwg46fvvc2zuwzzcd6y43i3da4bn$



after that this status appear to me in cache.log

2012/02/02 12:02:33| statusIfComplete: Request not yet fully sent POST 
http://192.168.40.2:3128/2.0/blugro2relay.groove.microsoft.com/3m4dy9mseq7e9h39xecabcaqj24zjcgw4zts55s,ConnType=LongLived;

and at 12:02:35 the server returned to working normally

please can you help me find the cause of this warning?




Re: [squid-users] UNSUBSCRIBE!!!!

2012-01-24 Thread Pieter De Wit

Control-U (Thunderbird) shows:

List-Unsubscribe: mailto:squid-users-unsubscr...@squid-cache.org

Hope that helps !

On 25/01/2012 05:47, Oliver Marshall wrote:

Same issue here.

I'm just marking it as spam as there's no clear unsubscribe link anywhere.

-Original Message-
From: Alona Rossen [mailto:aros...@opentext.com]
Sent: 24 January 2012 14:50
To: Amos Jeffries; Carlos Manuel Trepeu Pupo
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] UNSUBSCRIBE

How can I unsubscribe from this mailing list? I submitted an Unsubscribe
request a while ago, but it was ignored.

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: January 23, 2012 5:56 PM
To: Carlos Manuel Trepeu Pupo
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] save last access

On 24.01.2012 09:19, Carlos Manuel Trepeu Pupo wrote:

By "user": just real people, and maybe their IP.
By "surf last": only when a user is loading the page. I need the report
in real-time, but maybe one user surfed many days ago, and I need to
save that track too.

I use Squid 3.0 Stable1. What daemon can I use to make this ?

NOTE: STABLE1 is no longer qualifying as a *current* release. At minimum please 
upgrade to 3.0.STABLE26, which is at least still security patched and 
informally supported.


logger2sql or a custom script that works like it should meet your needs with 
3.0.


Amos



Thanks a lot for your answer !!

On Sat, Jan 21, 2012 at 1:23 AM, Amos Jeffriessqu...@treenet.co.nz
wrote:

On 21/01/2012 5:06 a.m., Carlos Manuel Trepeu Pupo wrote:

Hello ! I need to know when my users surf last time, so I need to
know
if there is any way to have this information and save to an sql
database.


The Squid log files are text data. So the answer is yes.

Please explain user.  Only real people? or any machine which
connects to
Squid?

Please explain surf last. Only when a user is loading the page? or
even
when their machine is doing something automatically by itself?

Please explain under what conditions you are wanting the information
back: monthly report? weekly? daily? hourly? real-time?


Current Squid releases support logging daemons which can send log
data
anywhere and translate it to any form. Squid-3.2 bundles with a DB
(database) daemon which is also available from SourceForge for
squid-2.7

Older Squid need log file reader daemons. Like squidtaild, and
logger2sql.

Amos


--
Network Support
Online Backups
Server Management

Tel: 0845 307 3443
Email: oliver.marsh...@g2support.com
Web: http://www.g2support.com
Twitter: g2support
Newsletter: http://www.g2support.com/newsletter
Mail: 2 Roundhill Road, Brighton, Sussex, BN2 3RF

Have you said something nice about us to a friend or colleague ?  Let us say 
thanks. Find out more at www.g2support.com/referral

G2 Support LLP is registered at Mill House, 103 Holmes Avenue, HOVE
BN3 7LE. Our registered company number is OC316341.





Re: [squid-users] HELP: UPDATE

2011-12-31 Thread Pieter De Wit

On 31/12/2011 21:32, someone wrote:

Ok, I rm -rf'd all directories named squid from my box, thinking that
attempting a fresh install afterwards would fix everything. NOPE,
and now the install binary won't recreate the directories. Yay, symlink
madness. Any suggestions on how to just get squid to reinstall from apt
would be so awesome.

I removed squid due to a botched attempt to build and install from
source. When I reinstalled squid after doing a make uninstall, squid was
complaining about the error pages missing; it was looking for them in a
new dir that it should not have been.

rm -rf /usr/share/squid3/        -- squid 3.1.6 SHOULD be looking here
rm -rf /usr/share/squid-langpack -- but squid keeps looking for them here



deviant:/# apt-get install squid3
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
   squidclient squid-cgi resolvconf
The following NEW packages will be installed:
   squid3
0 upgraded, 1 newly installed, 0 to remove and 2 not upgraded.
Need to get 0 B/1,445 kB of archives.
After this operation, 3,666 kB of additional disk space will be used.
Selecting previously deselected package squid3.
(Reading database ... 249217 files and directories currently installed.)
Unpacking squid3 (from .../squid3_3.1.6-1.2+squeeze1_i386.deb) ...
Processing triggers for man-db ...
Setting up squid3 (3.1.6-1.2+squeeze1) ...
Restarting Squid HTTP Proxy 3.x: squid3FATAL: MIME Config
Table /usr/share/squid3/mime.conf: (2) No such file or directory
Squid Cache (Version 3.1.6): Terminated abnormally.
CPU Usage: 0.012 seconds = 0.004 user + 0.008 sys
Maximum Resident Size: 16320 KB
Page faults with physical i/o: 0
  failed!



cache.log
===

2011/12/31 00:10:46| Starting Squid Cache version 3.1.6 for
i486-pc-linux-gnu...
2011/12/31 00:10:46| Process ID 25488
2011/12/31 00:10:46| With 65535 file descriptors available
2011/12/31 00:10:46| Initializing IP Cache...
2011/12/31 00:10:46| DNS Socket created at [::], FD 7
2011/12/31 00:10:46| DNS Socket created at 0.0.0.0, FD 8
2011/12/31 00:10:46| Adding nameserver 127.0.0.1 from squid.conf
2011/12/31 00:10:46| errorpage.cc(293) errorTryLoadText:
'/usr/share/squid3/errors/templates/ERR_LIFETIME_EXP': (2) No such file
or directory
FATAL: failed to find or read error text file.
Squid Cache (Version 3.1.6): Terminated abnormally.
CPU Usage: 0.040 seconds = 0.008 user + 0.032 sys
Maximum Resident Size: 41792 KB
Page faults with physical i/o: 0




Hi,

What does dpkg -l | grep squid show ?

Cheers,

Pieter


Re: [squid-users] Reverse Proxy Configuration

2011-12-28 Thread Pieter De Wit

Hi Roman,

What version of Squid are you using ?

Cheers,

Pieter

On Wed, 28 Dec 2011, Roman Gelfand wrote:


Consider the following configuration lines


https_port 443 cert=/etc/apache2/certs/server.pem
key=/etc/apache2/certs/server.key vhost vport
cache_peer 127.0.0.1 parent 8443 0 ssl no-query originserver
sslflags=DONT_VERIFY_PEER front-end-https login=PASS

What if there are more SSL sites which I would like to forward?
How can I accomplish that?

Also, it appears that alternate CN names are not being recognized.
Is there anything to do about that?

Thanks in advance
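
A hedged sketch of one way to host several HTTPS origin sites behind the same reverse proxy (all domain names, ports, and peer addresses below are placeholders): define one cache_peer per origin with a name=, then route requests to the right peer with dstdomain ACLs and cache_peer_access.

```
acl site1 dstdomain www.example-one.com
acl site2 dstdomain www.example-two.com

cache_peer 127.0.0.1 parent 8443 0 ssl no-query originserver name=origin1 sslflags=DONT_VERIFY_PEER front-end-https login=PASS
cache_peer 127.0.0.1 parent 8444 0 ssl no-query originserver name=origin2 sslflags=DONT_VERIFY_PEER front-end-https login=PASS

cache_peer_access origin1 allow site1
cache_peer_access origin1 deny all
cache_peer_access origin2 allow site2
cache_peer_access origin2 deny all
```

On the alternate-CN question: the single certificate given on https_port is presented for every site, so it would need to be a wildcard or SAN certificate covering all the hosted names.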



Re: [squid-users] MAC addresses as the only ACL restriction

2011-12-08 Thread Pieter De Wit

Hi,

MAC address filtering will only work on the same LAN segment. The MAC 
address seen for your IP will be that of the Squid server's gateway if 
you are connecting remotely.
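
If the client really is on the same segment, a minimal sketch looks like this (the MAC address is a placeholder, and this assumes a Squid built with ARP-ACL support; the ACL type was named 'arp' in Squid 2.x/3.0-era releases):

```
acl home_mac arp 00:11:22:33:44:55
http_access allow home_mac
http_access deny all
```

For remote use (the bus/work case), source-IP restrictions or proxy authentication remain the practical options, since the original MAC never survives routing.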


Cheers,

Pieter

On Thu, 8 Dec 2011, Inter Node wrote:


Hello everyone, I have a question (and it may be a stupid one), but here goes. 
I use squid on my server for privacy reasons when I surf the web. I currently 
use IP addresses as my access restriction; only my home IP has access to my 
squid server. I was thinking of transitioning to MAC addresses for ACL 
purposes, so that I can use my proxy when I'm on the bus or at work. Are MAC 
addresses any less secure as an ACL restriction than IP addresses?

Thank you for your time!




Re: [squid-users] Non-transparent port works, transparent doesn't

2011-10-17 Thread Pieter De Wit

Hi,

Maybe I am missing it, but where is the rule to REDIRECT port 80 to 13128 
in iptables ?
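
For reference, the kind of rule being asked about might look like this (the interface and client network are placeholders; 13128 matches the intercept port in the config below):

```
iptables -t nat -A PREROUTING -i eth0 -p tcp -s 1.2.3.0/24 --dport 80 -j REDIRECT --to-port 13128
```

Without such a rule, nothing ever arrives on the intercept port from the clients. It would not by itself explain the immediate close on a direct telnet, though, since intercept ports reject connections whose original destination cannot be determined from the NAT table.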


Cheers,

Pieter

On Tue, 18 Oct 2011, zozo zozo wrote:


I'm trying to make squid work as transparent proxy on CentOS, squid ver is 
3.2.0.12, with ecap enabled.
The problem is that squid doesn't work on transparent port and responds on 
non-transparent port.

I've simplified the configuration as much as possible to exclude access errors.
Here's my squid.conf:

http_port 13128 intercept
http_port 13129
acl our_networks src 1.2.3.0/24
acl localnet src 127.0.0.1/24
http_access allow all
http_access allow our_networks
http_access allow localnet

cache_mem 0 MB
cache deny all

#end of squid.config

1.2.3.0/24 is my client network, but I do my testing on the server itself and it 
shouldn't matter since "http_access allow all". I tried both intercept and transparent.
With this config squid works on 13129 - I check it by telnet 127.0.0.1 13129, 
then GET - I get html of squid error page, which means squid is alive and 
listening. Also browser request from my client machine from outside is served.
But when I telnet 127.0.0.1 13128, a curious thing happens:

Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Connection closed by foreign host.

That is, the port is listening and the connection is accepted, but it is closed 
immediately. The same happens if I use an IP other than 127.0.0.1.

I have been able to configure squid as transparent proxy on Ubuntu and Ubuntu 
server, but now staging environment has CentOS, and I've been fighting it for 
several days now.
Just in case I'm also attaching iptables.

[root@host13516 etc]# iptables-save
# Generated by iptables-save v1.3.5 on Tue Oct 18 03:52:54 2011
*mangle
:PREROUTING ACCEPT [1490:127866]
:INPUT ACCEPT [1490:127866]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1391:507115]
:POSTROUTING ACCEPT [1391:507115]
COMMIT
# Completed on Tue Oct 18 03:52:54 2011
# Generated by iptables-save v1.3.5 on Tue Oct 18 03:52:54 2011
*filter
:INPUT ACCEPT [1490:127866]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1391:507115]
COMMIT
# Completed on Tue Oct 18 03:52:54 2011


Maybe it's something about how squid was compiled? But I thought iptables 
support is enabled by default.

I humbly ask for help.




Re: [squid-users] RAID0, LVM or JBOD for best performance squid proxy server?

2011-08-05 Thread Pieter De Wit

Hi Roberto,

Each of the systems you mention will only add an extra layer to the 
storage solution. Squid (not sure from which version, but I am very sure 
it's mainstream on all distros) already has support for multiple cache 
directories, so my suggestion (if you don't need LVM to extend or move 
physical disks etc.) is to make the disks normal mount points. The file 
system that you use will have to be researched (ext4 vs xfs vs reiserfs 
vs ...) but I have used ext3/4 with great success (at least enough for 
me not to complain :) )


See http://wiki.squid-cache.org/BestOsForSquid at the bottom for File 
Systems etc


The only one that *might* improve things is RAID0 but I can't really see 
this as squid won't be writing *that* much (on a 100meg connection)


You can also read up on : http://wiki.squid-cache.org/SquidFaq/RAID

Cheers,

Pieter

On 6/08/2011 09:33, rpere...@lavabit.com wrote:


I need to choose the storage type for a squid proxy server with 100Mb/s
traffic.

RAID0, LVM or JBOD

Which is better for performance (I don't care about data reliability)?

This storage is only for the squid cache (not system disk).

I have 3 disks for the array.

I'm using centos 5.

Thanks for any advice.

roberto







Re: [squid-users] RAID0, LVM or JBOD for best performance squid proxy server?

2011-08-05 Thread Pieter De Wit

On 6/08/2011 10:34, rpere...@lavabit.com wrote:

Hi Roberto,

Each of the systems you mention will only add an extra layer to the
storage solution. Squid (not sure from which version but I am very sure
it's main stream on all distros) already has support for multiple cache
directories so my suggestion (if you don't need LVM to extend or move
physical disks etc) is to make the disks normal mount points. The File
system that you use will have to be researched (ext4 vs xfs vs reiserfs
vs ...) but I have used ext3/4 with great success (at least enough for
me not to complain :) )

See http://wiki.squid-cache.org/BestOsForSquid at the bottom for File
Systems etc

The only one that *might* improve things is RAID0 but I can't really see
this as squid won't be writing *that* much (on a 100meg connection)

You can also read up on : http://wiki.squid-cache.org/SquidFaq/RAID

Cheers,

Pieter

Hi Pieter. Thanks for your help !!

You say something like this?

cache_dir aufs / disk1/squid-cache/squid 10 64 256
cache_dir aufs / disk2/squid-cache/squid 10 64 256
cache_dir aufs / disk3/squid-cache/squid 10 64 256

Should I add something more to balance the load?

regards

roberto



Hi Roberto,

cache_dir aufs /disk1/bla

Is what I had in mind :) so just a fix of a typo I believe.
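
A fuller sketch of what the corrected config might look like (the mount points and the 10000 MB sizes are illustrative placeholders):

```
cache_dir aufs /disk1/squid-cache 10000 64 256
cache_dir aufs /disk2/squid-cache 10000 64 256
cache_dir aufs /disk3/squid-cache 10000 64 256
```

No extra balancing configuration should be needed: as far as I know, Squid spreads new objects across its cache_dirs itself (by default preferring the least-loaded directory).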

Cheers,

Pieter


Re: [squid-users] Squid Bandwidth

2011-08-03 Thread Pieter De Wit

On 4/08/2011 05:23, viswanathan sekar wrote:

Hello Squid_Team,

can squid handle multiple Gbps of traffic ?

Is there any patch for squid to handle multiple Gbps of traffic ?


Thanks
-Viswa

Hi Viswa,

Stock standard source will do the trick. It depends more on the hardware 
than the software. Look for a recent post by Amos - it will have the 
recommended versions - or check on the website.


Cheers,

Pieter


[squid-users] Caching of Big objects (bigger than memory limit)

2011-05-11 Thread Pieter De Wit

Hi List,

From my understanding, Squid will add an object into memory, then page it 
out to disk as the memory limit gets full. (Barring another 1000 checks 
that I didn't mention :) )


My question is, what will happen with an object that is bigger than 
maximum_object_size_in_memory ?


Here is my setup:

maximum_object_size_in_memory 16 MB
maximum_object_size 1 GB

cache_mem is 512MB

If I request a cachable object of 30meg, what will happen to it ?

Thanks,

Pieter


Re: [squid-users] Caching of Big objects (bigger than memory limit)

2011-05-11 Thread Pieter De Wit
Thanks Amos, Yeah - forgot to mention that I am on 3.1.6, latest Deb 6 
version.


Cheers,

Pieter

On Thu, 12 May 2011, Amos Jeffries wrote:


On 12/05/11 13:32, Pieter De Wit wrote:

Hi List,

 From my understanding, Squid will add an object into memory, then page
it out to disk as the memory limit gets full. (Barring another 1000
checks that I didn't mention :) )

My question is, what will happen with an object that is bigger than
maximum_object_size_in_memory ?


It goes to disk immediately on arrival and only the window of bytes not yet 
sent to the clients stays in memory.


Some versions of Squid (known for their excessive memory consumption) will 
keep the whole thing in memory until finished, but that bug is fixed in 
current releases.


Amos
--
Please be using
 Current Stable Squid 2.7.STABLE9 or 3.1.12
 Beta testers wanted for 3.2.0.7 and 3.1.12.1



Re: [squid-users] Squid v/s Apache's reverse proxy

2011-05-10 Thread Pieter De Wit

Hi John,

I have seen it doing 1500 +/- requests/sec (peaks into the 2000s) without 
the CPU breaking a sweat (as in, less than 10%, even 5%). This is 3.1 though 
(which I thought was slower than 2.7?)


Surely if it were crappy code (which it's not) the CPU would be the 
bottleneck to cracking 2000/sec. (Unless the disks were slowing down the 
caching, but again, that would show up as IOWait time on the CPU - the 
10/5% I was referring to was idle time.)


Cheers,

Pieter

On 11/05/2011 06:27, Jawahar Balakrishnan (JB) wrote:

We are evaluating a vendor who claims that their Apache proxy based
solution performs better than Squid because squid doesn't scale on
multi-cpu / multi-core servers whereas apache does scale nicely. Their
tests show squid version 2.7 to perform at 2000 requests/sec while the
apache solution performs closer to 10K requests/sec; they also show the
newer versions to be slower and better suited as a forward proxy
solution.

I would love to hear from anyone who might have done a similar
comparison or if anyone has any thoughts on this. I definitely don't
doubt their claims but it came as a surprise to me.

Thanks
JB




Re: [squid-users] Squid - Dual WAN Links

2011-05-09 Thread Pieter De Wit

Hi John,

I have done this before with 3 DSL links. I personally would leave squid 
out of the picture and configure the OS to do this. Once that is 
working, there are a few tricks to get Squid to load balance the 
traffic. It's not really in the scope of this list to cover it.


Amos - I am wondering whether it would be useful to put a little page up on 
the wiki for this? The question seems to pop up quite often.


Cheers,

Pieter

On 10/05/2011 00:48, John Sayce wrote:

I have two squid proxy servers.  I use a PAC script to assign the proxy servers 
with one being a primary and one being a failover.  This works great but I 
would like to achieve a similar configuration with the access to the WAN links 
from proxy servers.  I have two Wan DSL Links and two dsl routers.  I'm open to 
changing this configuration but I'd like to avoid a dual wan router as this 
would mean no redundancy if the router fails.

At present I have no requirement for load balancing, although in future I may 
assign bandwidth-sensitive applications to the failover.  I could probably 
write a script to check the WAN links and the routers, which could then change 
the network settings and restart if required, but this seems a rather inelegant 
solution.  Is there a way of doing this with squid, or has anyone got any better 
ideas?

Regards
John Sayce





Re: [squid-users] Blocking traffic from one specific source only.

2011-04-08 Thread Pieter De Wit

Hi Thomas,

With router - do you mean that is the IP the clients will hit 
Squid with? If so, there are two ways to do this. Since you are running 
a transparent proxy, you will have some firewall rules doing port forwarding; 
you could block the traffic there. The other way is to make an ACL with src 
set to the IP, then block it at the Squid level. These blocks have to be 
before any allow rules (or at least before the allow rule for that IP).
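
A minimal sketch of the ACL approach (the router IP and the blocked domain are placeholders - substitute the customer's real values):

```
acl customer_router src 203.0.113.10
acl blocked_sites dstdomain .example.com
http_access deny customer_router blocked_sites
# ...existing http_access allow rules follow...
```

Because both ACLs sit on one http_access line, the deny matches only requests from that router going to those domains; other customers are unaffected.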


Hope that helps.

Pieter

On 9/04/2011 06:28, Thomas v Graan wrote:

Hi there.
I run a transparent proxy inside our network using Squid 2.6 Stable21 on
Centos 5.3.
I have been asked by a customer to block certain traffic originating
from their outgoing router with fixed IP-address.
This blocking should not affect other customers on the network.

Can anyone help, please.

Regards

Thomas







Re: [squid-users] Squid slows under load

2011-03-03 Thread Pieter De Wit

Hi Julian,

The one stat that I can't see here is disk access. I know you said that 
you have SSDs, but what are the disk stats for your logging volume and 
the squid volume? If you totally bypass the proxy, does it improve? 
(Could it be that the squid server is getting shaped?)


Cheers,

Pieter

On 4/03/2011 06:46, Julian Pilfold-Bagwell wrote:

Hi All,

I've been having some problems with Squid and Dansguardian for a while 
now and despite lots of time on Google, haven't found a solution.


The problem started a week or so back when I noticed that squid was 
slowing.  A quick look through the logs showed it was running out of 
file descriptors so I upped the level to take account.  The server was 
ancient so I bought in an HP Proliant DL120 (dual Pentium 2.80Ghz 
G6950 CPU  4GB of RAM).  At the same time, I bought in 2 x 60GB SSD 
drives to use as cache space with the system on a RAID 1 array with 
160GB SATA II disks.


On this, I installed Ubuntu server 10.04.2 LTS with Squid 2.7 (from 
apt) and Dansguardian 2.10.1.1. The kernel version is 2.6.32-24-server 
and the server authenticates via a Samba PDC (v 3.5.6) using 
OpenLDAP/Winbind.  The Samba version on the proxy machine is v 3.4.7 
as supplied from the Ubuntu repo.


This however also seems to run out of steam.  My first thought was 
that it may have been running out of RAM so I ran htop.  Both CPUs 
were topping out at 20% and out of the 4GB of RAM, 1.3GB was used.  
Next I checked the load on the NIC and found that it was running on 
average 400kB/s, with the odd burst at 5MB/s.  As the load increased, 
web pages were taking up to 30-45 seconds to load.  I bypassed 
Dansguardian and went in on 3128 with no change in performance.


Following the recommendations on other sites discovered via Google, I 
tuned and tweaked settings with no real benefit and I can't see that I 
changed anything to cause it to happen. The log files look fine, I 
have 1 file descriptors available and cachemgr shows plenty of 
spares. There are 50% more NTLM authenticators than are in use at any 
given time.


The config file for Squid is shown below.  I have had the number of 
authenticators set to 400 as I have 350 users but the number in use 
still peaked at around 50. If I've been a numpty and done something 
glaringly obvious, I'd be grateful if someone could point it out. If 
not, ask for info and I'll provide it.


Thanks,

Jools


## Squid.conf
## Start with authentication for clients

auth_param ntlm program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-ntlmssp

auth_param ntlm_param children 100
auth_param ntlm keep_alive on

auth_param basic program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-basic

auth_param basic children 100
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours

## Access Control Lists for filter bypass ##
acl realtek dstdomain .realtek.com.tw
acl tes dstdomain .tes.co.uk
acl glogster dstdomain .glogster.com
acl adobe-installer dstdomain .adobe.com # allow installs from adobe 
download manager
acl actihealth dstdomain .actihealth.com .actihealth.net # Allow 
direct access for PE dept activity monitors
acl spybotupdates dstdomain .safer-networking.org .spybotupdates.com # 
Allow updates for Spybot SD
acl sims-update dstdomain .kcn.org.uk .capitaes.co.uk 
.capitasolus.co.uk .sims.co.uk # Allow SIMS to update itself directly

acl kcc dstdomain .kenttrustweb.org.uk # Fix problem with county
acl frenchconference dstdomain flashmeeting.e2bn.net
acl emsonline dstdomain .emsonline.kent.gov.uk
acl clamav dstdomain .db.gb.clamav.net
acl ubuntu dstdomain .ubuntu.com .warwick.ac.uk
acl windowsupdate dstdomain windowsupdate.microsoft.com
acl windowsupdate dstdomain .update.microsoft.com
acl windowsupdate dstdomain download.windowsupdate.com
acl windowsupdate dstdomain redir.metaservices.microsoft.com
acl windowsupdate dstdomain images.metaservices.microsoft.com
acl windowsupdate dstdomain c.microsoft.com
acl windowsupdate dstdomain www.download.windowsupdate.com
acl windowsupdate dstdomain wustat.windows.com
acl windowsupdate dstdomain crl.microsoft.com
acl windowsupdate dstdomain sls.microsoft.com
acl windowsupdate dstdomain productactivation.one.microsoft.com
acl windowsupdate dstdomain ntservicepack.microsoft.com
acl windowsupdate dstdomain download.adobe.com
acl comodo dstdomain download.comodo.com
acl simsb2b dstdomain emsonline.kent.gov.uk
acl powerman dstdomain pmstats.org
acl ability dstdomain ability.com
acl fulston dstdomain fulstonmanor.kent.sch.uk
acl httpsproxy dstdomain .retiredsanta.com .atunnel.com .btunnel.com 
.ctunnel.com .dtunnel.com .ztunnel.com .partyaccount.com


## Access Control for filtered users ##
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl ntlm_users proxy_auth REQUIRED

acl SSL_ports port 443  # https
acl SSL_ports port 563  

Re: [squid-users] Disk space for two node squid

2011-02-24 Thread Pieter De Wit

On 25/02/2011 06:04, N3O wrote:

Hi

I want to create a 2-node squid reverse caching layer for an apache
server that has these features:

  - 2 Xeon@ 2GHz
  - 2 GB RAM
  - 72GB HD RAID1
  - RHEL with Kernel 2.6.9
  - 1 EXT3 filesystem

   Traffic is a million visits/month

My question is: how much disk space should I allocate for caching data
on the squid nodes?
My idea is to use Amazon EC2 instances for them, but I don't know how
much disk space I should
use for caching data under this scenario...

Any recommendation will be welcome!

Greetings


Hi,

How big is the static content on your website? I would start with a 
minimum of that, unless you have gigs and gigs of static content that 
might be viewed once or twice a year. I would not go below 5 gig 
though. It might sound like very little, but you will be surprised 
how much you can fit in 5 gig, web-wise, and how well Squid handles all 
of it.


Cheers,

Pieter


[squid-users] X-Forwarded-For + Squid Version 3.0.STABLE8

2011-02-20 Thread Pieter De Wit

Hi Guys,

I run a reverse proxy for a client. They are using XFF for restricting 
certain content by IP.


We have noted that the following doesn't appear to work as it should:

header_replace X-Forwarded-For allow all

My understanding is that this will cause squid to replace the XFF header 
with its own client IP?


I see there are various answers about this on the internet, so I would like 
to know which one applies to this setup.


Here are some more details on the proxy chain:

client - proxy1 - proxy2 - origin web server

Proxy 1 should replace the XFF header no matter what, so that if the client 
is behind a proxy, it doesn't matter.


Proxy 2 should just pass the header as per normal, it doesn't matter if it 
adds an IP to the header.


I am looking at replacing these boxes with Debian 6 boxes over the next 
week or so, but would really like to nail this one now :)


Thanks,

Pieter


Re: [squid-users] X-Forwarded-For + Squid Version 3.0.STABLE8

2011-02-20 Thread Pieter De Wit

Hi Amos,

Thanks for the reply - I remember seeing the doc bug :)

I am building the Deb6 boxes as we speak (ext4+squid 3.1 is sounding very 
nice)


Cheers,

Pieter

On Mon, 21 Feb 2011, Amos Jeffries wrote:


On Mon, 21 Feb 2011 12:16:46 +1300 (NZDT), Pieter De Wit wrote:

Hi Guys,

I run a reverse proxy for a client. They are using XFF for
restricting certain content to IP.

We have noted that the following doesn't appear to work as it should:

header_replace X-Forwarded-For allow all

My understanding is that this will cause squid to replace the XFF
header with its own client IP?


No, this will replace the content of X-Forwarded-For with the literal text 
"allow all".


BUT, only if there is a corresponding request_header_access X-Forwarded-For 
deny line (or reply_header_access).


FWIW there was a documentation bug for a while indicating that Squid would 
add its *own* IP to XFF.
 Squid will never do that. Only the remote visitors/client IP is added to 
XFF.




I see there are various answers about this on the internet so I would
like to know which one applies to this setup.



In 3.0 you can use the header access denial + replace to strip out the 
existing header and add any desired forgery.


In 3.1+ you can use forwarded_for truncate to erase a prior history trace 
and perform what you describe in a much cleaner way. This is not usually a 
good idea and only useful to paper around broken web app security 
vulnerabilities.




Here are some more details on the proxy chain:

client - proxy1 - proxy2 - origin web server

Proxy 1 should replace the XFF header no matter what, so that if
client is behind a proxy, it doesn't matter.


Well, truncate will do that, BUT using an origin server app which only pulls 
the *newest* IP off the list will be much better. And will protect against 
malicious forgery attacks as well.




Proxy 2 should just pass the header as per normal, it doesn't matter
if it adds an IP to the header.

I am looking at replacing these boxes with Debian 6 boxes over the
next week or so, but would really like to nail this one now :)


Then you will have access to 3.1.6+ with the above mentioned forwarded_for 
extensions.


In this setup in order to pass the client IP to the origin I would advise 
using this config:


proxy 1:
 - nothing special. It will add the real client IP to X-Forwarded-For: 
header.
 - you MAY use forwarded_for truncate here to explicitly erase any past 
garbage. But see above.


proxy 2:
 forwarded_for transparent

- this will mean proxy 2 preserves the client IP proxy1 added as latest on 
the list, by not mentioning proxy1

- BE CAREFUL that the only way requests can reach proxy2 is via proxy1.

origin:
- trust proxy 2 as provider of X-Forwarded-For and grab the latest client 
from the XFF which it hands over.
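
Amos's recipe, collected as hedged squid.conf fragments (assuming Squid 3.1.6+; the directive names are as documented, the placement is illustrative):

```
# proxy 1 (client-facing): default behaviour already appends the real
# client IP to X-Forwarded-For; optionally erase any prior history:
#forwarded_for truncate

# proxy 2 (origin-facing): preserve the entry proxy1 appended, without
# adding proxy1's own IP to the list:
forwarded_for transparent
```

The origin then reads the newest entry in X-Forwarded-For, trusting only proxy 2 as its source.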


Amos




Re: [squid-users] X-Forwarded-For + Squid Version 3.0.STABLE8

2011-02-20 Thread Pieter De Wit

Hi Amos,

just had a go at this:

request_header_access X-Forwarded-For deny
header_replace X-Forwarded-For

and it's still passing XFF from another source through - nothing too urgent 
since the Deb6 boxes are getting built :) But perhaps you spot something?


Cheers,

Pieter



Re: [squid-users] X-Forwarded-For + Squid Version 3.0.STABLE8

2011-02-20 Thread Pieter De Wit

On 21/02/2011 18:16, Amos Jeffries wrote:

On 21/02/11 16:33, Pieter De Wit wrote:

Hi Amos,

just had a go at this:

request_header_access X-Forwarded-For deny
header_replace X-Forwarded-For

and it's still passing XFF from another source thru - Nothing to urgent
since the Deb6 boxes are getting built :) But if you spot something ?


Just a typo: missing "all" after the "deny".

and there is no value hard-coded into the header on the replace line.

This one is tricky to use since you have to hard-code the value passed 
back; it won't contain the real client IP you want.


Amos
Yeah, not quite what we are after so squid 3.1.6 will have to do the 
trick :)


Thanks for the time !

Pieter


Re: [squid-users] Question on transparent proxy with web server behind proxy.

2011-01-25 Thread Pieter De Wit

Hi Ben,

There sure is :)

Change the IP Tables rule at the bottom to something like this:

/sbin/iptables -t nat -A PREROUTING -i br0 -p tcp -s 192.168.0.0/24 
--dport 80 -j REDIRECT --to-port 3128


Replace the 192.168 with your network. Keep in mind that you can have 
multiples of these :)


In a nutshell, IP Tables was making each request (even from the outside 
world) go via Squid.


The other solution is to process those via squid, which will take some 
load off the web servers.


Cheers,

Pieter

On 26/01/2011 06:43, Ben Greear wrote:

Hello!

We have a squid + bridge + transparent proxy working pretty
well.  It seems to be properly caching and dealing with data
when requests are coming from behind the bridge to the outside
world.

But, there are some web servers behind the bridge that should
be accessible to the outside world.  When the outside attempts
to access them, squid is attempting to cache those requests
as well.

Is there any way to just have squid handle traffic originating
on the inside?

We're using firewall rules like this:

/sbin/ebtables -t broute -A BROUTING -i br0 -p IPv4 --ip-protocol 6 
--ip-destination-port 80 -j redirect --redirect-target ACCEPT
/sbin/iptables -t nat -A PREROUTING -i br0 -p tcp --dport 80 -j 
REDIRECT --to-port 3128


Thanks,
Ben





Re: [squid-users] Question on transparent proxy with web server behind proxy.

2011-01-25 Thread Pieter De Wit

Hi Ben,

On 26/01/2011 06:55, Ben Greear wrote:

On 01/25/2011 09:48 AM, Pieter De Wit wrote:

Hi Ben,

There sure is :)

Change the IP Tables rule at the bottom to something like this:

/sbin/iptables -t nat -A PREROUTING -i br0 -p tcp -s 192.168.0.0/24
--dport 80 -j REDIRECT --to-port 3128

Replace the 192.168 with your network. Keep in mind that you can have
multiples of these :)

In a nutshell, IP Tables was making each request (even from the outside
world) go via Squid.


Do you happen to know if it can be done based on incoming (real) port
so we don't have to care about IP addresses?

You can, but that is not guaranteed, since the source port is assigned 
at random by the OS. Keep in mind that it will be Chrome/IE/Firefox/insert 
browser here that makes the connection. 
Having re-read your suggestion, are you not referring to the ethernet port?

The other solution is to process those via squid, which will take some
load off the web servers.


I'm a bit out of the loop, but for whatever reason, the users don't
want this to happen.

Thanks for the quick response!

Ben






Re: [squid-users] Question on transparent proxy with web server behind proxy.

2011-01-25 Thread Pieter De Wit

Hi Ben,

I suspect that will do the trick :)

Let us know

Cheers,

Pieter

On Tue, 25 Jan 2011, Ben Greear wrote:


On 01/25/2011 10:36 AM, Ben Greear wrote:

On 01/25/2011 10:06 AM, Pieter De Wit wrote:

Hi Ben,

On 26/01/2011 06:55, Ben Greear wrote:

On 01/25/2011 09:48 AM, Pieter De Wit wrote:

Hi Ben,

There sure is :)

Change the IP Tables rule at the bottom to something like this:

/sbin/iptables -t nat -A PREROUTING -i br0 -p tcp -s 192.168.0.0/24
--dport 80 -j REDIRECT --to-port 3128

Replace the 192.168 with your network. Keep in mind that you can have
multiples of these :)

In a nutshell, IP Tables was making each request (even from the outside
world) go via Squid.


Do you happen to know if it can be done based on incoming (real) port
so we don't have to care about IP addresses?


You can, but that is not guaranteed, since the source port should be
assigned at random by the OS. Keep in mind that this will be
Chrome/IE/Firefox/insert browser here that makes the connection.
Having re-read your suggestion, are you not referring to the ethernet
port ?


I mean ethernet port/interface, something like '-i br0
--original-input-dev eth0'

If nothing comes to mind immediately, don't worry..I'll go read man
pages :)


Looks like '--physdev-in eth0'
might do the trick... we'll do some testing.

Thanks,
Ben



Thanks,
Ben





--
Ben Greear gree...@candelatech.com
Candela Technologies Inc  http://www.candelatech.com




Re: [squid-users] forwarding hostname to 2nd lan interface.

2010-04-19 Thread Pieter De Wit

Hi Moris,

A quick look over this and the problem is that you have two default 
gateways. I run a setup close to this (we have 3 default gateways) and 
without routing it using rt_tables and the like you won't have any luck.


Do a tcpdump of the interfaces while requesting traffic and you will see 
what I mean, the packet will go out with the right source address, but the 
wrong interface.
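
A hedged sketch of the rt_tables approach for this thread's addresses (eth1 = 192.168.11.240 with gateway 192.168.11.254; the table name "wan2" is a placeholder that must first be registered in /etc/iproute2/rt_tables):

```
# route traffic sourced from the second address out via eth1
ip route add default via 192.168.11.254 dev eth1 table wan2
ip rule add from 192.168.11.240 lookup wan2
ip route flush cache
```

Combined with tcp_outgoing_address in squid.conf, this makes packets carrying the eth1 source address actually leave via eth1 rather than the primary default gateway.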


If you need to setup the rest, contact me off list and we can see if my 
script will do it.


Cheers,

Pieter

On Tue, 20 Apr 2010, Moris Diu wrote:


Amos Jeffries wrote:

 EIN SA wrote:

 Hello all,
 I am looking for a solution to forward some specific hostname to my 2nd
 lan card.
 My network inferface
 eth0   192.168.2.80 (connect Internet by a dynamic real IP)
 eth1   192.168.11.240 (connect Internet by a fixed real IP)

 All user client PCs point to 192.168.2.80 and by default traffic will go out
to eth0. But if a user tries to go to ebrary.com, I wish the routing to
go via eth1.

 I have the following settings in my squid.conf but they do not work

 acl To_ebrary dstdomain .ebrary.com
 acl From_ebrary srcdomain .ebrary.com
 tcp_outgoing_address 192.168.11.240 To_ebrary
 tcp_outgoing_address 192.168.11.240 From_ebrary


 Almost. Try this:

   acl To_ebrary dstdomain .ebrary.com
   tcp_outgoing_address 192.168.2.80 !To_ebrary
   tcp_outgoing_address 192.168.11.240 To_ebrary

 Amos

Hi Amos,
Thank you for your help; I changed the config as you suggested.
It still fails to route via 192.168.11.240. If I type www.google.com in
IE, the traffic goes out via 192.168.2.80. But type www.ebrary.com, and
the browser shows this error message:

ERROR
The requested URL could not be retrieved



The following error was encountered while trying to retrieve the URL:
http://www.ebrary.com/

Connection to 140.234.254.11 failed.




Following is my network config; there are no iptables rules:
/tmp# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
192.168.2.0     0.0.0.0         255.255.255.0   U         0 0          0 eth0
192.168.11.0    0.0.0.0         255.255.255.0   U         0 0          0 eth1
0.0.0.0         192.168.2.254   0.0.0.0         UG        0 0          0 eth0
0.0.0.0         192.168.11.254  0.0.0.0         UG        0 0          0 eth1








Re: [squid-users] Change tcp_outgoing_address every hour, best way to do this?

2009-10-13 Thread Pieter De Wit

Hi Andres,

I am assuming you want to do this for a load-balance setup. I also doubt 
that you have 12 (or even 24) upstream connections. I would simply suggest 
that you run one main squid, assign some ACLs to it based on time, and 2 or 
more parent proxies. Those parent proxies can run on the same machine, 
with little cache and memory, and they carry the other IPs in their own 
configs. That way, there is no reconfigure etc.


I have 4 squids running on one box for this purpose - details will be posted 
to here once the solution is in production :)


Cheers,

Pieter

- Original Message - 
From: Andres Salazar ndrsslz...@gmail.com

To: squid-users@squid-cache.org
Sent: Wednesday, October 14, 2009 05:19
Subject: [squid-users] Change tcp_outgoing_address every hour, best way to 
do this?




Hello,

I am wanting to pass the option of tcp_outgoing_address when I run the
command to refresh or reload the config file. This is so that every hour
I can rotate with a cron job the IP that squid uses to browse the
internet.

Is this possible? Or is there a better way then to create dozens of
config files with the only difference being the IP?

Andres





Re: [squid-users] Change tcp_outgoing_address every hour, best way to do this?

2009-10-13 Thread Pieter De Wit

Hi Andres,

It's not a load issue; normally, the reason people want to change 
the source address is to load-balance links (in this kind of setup). If 
you want to rotate the IPs just because, then so be it :)


I would still suggest, though, that you run multiple squids on the box and 
rotate between them, since tcp_outgoing_address has no ACL control (that 
might be a feature request - Devs ? )


Like I said before, I will be publishing quite a bit of the work I did for 
this type of setup, but if you decide to go with this option, I can share 
the squid configs ahead of time.


Cheers,

Pieter

On Tue, 13 Oct 2009, Andres Salazar wrote:


Hello,

I actually have about 100 IPs. Squid can handle the load without
problem; I just need to rotate the tcp_outgoing_address IP for all
users every hour?

I really don't have any idea how to do this; any example would be
much appreciated.

Andres


On Tue, Oct 13, 2009 at 11:26 AM, Pieter De Wit pie...@insync.za.net wrote:

Hi Andres,

I am assuming you want to do this for a Load Balance setup. I also doubt
that you have 12 (or even 24) upstream connections. I would simply suggest
that you run one main squid, assign some ACLs to it based on time and 2 or
more parent proxies. Those parent proxies can run on the same machine,
with little cache and memory, and they carry the other IPs in their own
configs. That way, there is no reconfigure etc.

I have 4 squids running on one box for this purpose - details will be posted
to here once the solution is in production :)

Cheers,

Pieter

- Original Message - From: Andres Salazar ndrsslz...@gmail.com
To: squid-users@squid-cache.org
Sent: Wednesday, October 14, 2009 05:19
Subject: [squid-users] Change tcp_outgoing_address every hour, best way to
do this?



Hello,

I am wanting to pass the option of tcp_outgoing_address when I run the
command to refresh or reload the config file, so that every hour
I can rotate, with a cron job, the IP that squid uses to browse the
internet.

Is this possible? Or is there a better way than creating dozens of
config files with the only difference being the IP?

Andres








Re: [squid-users] Change tcp_outgoing_address every hour, best way to do this?

2009-10-13 Thread Pieter De Wit
Ah - this is what I was looking for... makes me wonder... I might change 
my setup :)


So tcp_outgoing_address does support ACLs tagged to it, so you could have 
something like:


acl morning time 06:00-11:59
acl afternoon time 12:00-17:59
# squid time ACLs cannot cross midnight, so night needs two lines (OR'd)
acl night time 18:00-23:59
acl night time 00:00-05:59

tcp_outgoing_address 1.2.3.4 morning # IP used in the morning
tcp_outgoing_address 1.2.3.5 afternoon # IP used in the afternoon
tcp_outgoing_address 1.2.3.6 night # IP used at night

Cheers,

Pieter

On Tue, 13 Oct 2009, Henrik Nordstrom wrote:


tis 2009-10-13 klockan 11:19 -0500 skrev Andres Salazar:

Hello,

I am wanting to pass the option of tcp_outgoing_address when I run the
command to refresh or reload the config file, so that every hour
I can rotate, with a cron job, the IP that squid uses to browse the
internet.

Is this possible? Or is there a better way than creating dozens of
config files with the only difference being the IP?


I would set up a included squid.conf snippet with 24
tcp_outgoing_address settings (one per hour, selected by acl) and update
this file nightly to assign a new set of IP addresses for the next day.

generate_random_outgoing.sh:

#!/bin/sh
HOUR=0
cat "$1" | sort -R | while [ $HOUR -lt 24 ] && read ip; do
 printf 'acl hour_%d time %02d:00-%02d:59\n' "$HOUR" "$HOUR" "$HOUR"
 printf 'tcp_outgoing_address %s hour_%d\n' "$ip" "$HOUR"
 HOUR=`expr $HOUR + 1`
done

Usage:
generate_random_outgoing.sh /path/to/file_with_ipaddresses.txt > \
 /path/to/etc/squid/random_outgoing.conf
squid -k reconfigure

and in squid.conf

include /path/to/etc/squid/random_outgoing.conf
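To sanity-check what the generated snippet looks like, the same per-hour logic can be run over a short, made-up address list (a sketch for illustration, not Henrik's exact script):

```shell
# Build the squid.conf lines the hourly rotation would emit for a
# 3-address list (addresses are made up for the demo).
printf '10.0.0.1\n10.0.0.2\n10.0.0.3\n' > /tmp/ip_list.txt
HOUR=0
SNIPPET=""
while [ "$HOUR" -lt 3 ] && read ip; do
    LINE=$(printf 'acl hour_%d time %02d:00-%02d:59\ntcp_outgoing_address %s hour_%d' \
        "$HOUR" "$HOUR" "$HOUR" "$ip" "$HOUR")
    SNIPPET="${SNIPPET}${LINE}
"
    HOUR=$((HOUR + 1))
done < /tmp/ip_list.txt
printf '%s' "$SNIPPET"
```

The first emitted pair is "acl hour_0 time 00:00-00:59" followed by "tcp_outgoing_address 10.0.0.1 hour_0", and so on for each hour.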

Regards
Henrik




[squid-users] squid -k cmd with multiple copies running

2009-10-13 Thread Pieter De Wit

Hi Guys,

I have been googling for a bit now and I can't seem to find an answer. I 
have 4 copies of squid running on my box (6 GB RAM / 160 GB disk space - 
1 GB RAM per squid, 16 GB disk space each)


I am busy adjusting the startup scripts (gentoo) to cope with this. 
Currently I am starting them in a detached screen.


My problem comes with understanding how squid is going to handle the -k 
commands, like reconfigure, rotate and the like.


If I run something like squid -k reconfigure -f 
/etc/squid/squid_other.conf (which has a different pid setting), will it 
send the signal to that one squid, or to all of them ?
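For what it's worth, squid takes pid_filename from whichever config file is given with -f and signals only the process recorded in that pid file, so giving each instance its own pid file keeps the -k commands separate. A sketch (paths are assumptions):

```
# /etc/squid/squid_other.conf - one pid file per instance
pid_filename /var/run/squid_other.pid

# then this signals only that instance:
#   squid -f /etc/squid/squid_other.conf -k reconfigure
```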


Thanks !

Pieter


[squid-users] delay_pools on aborted objects

2009-04-14 Thread Pieter De Wit

Hey Guys,

What does squid do when a request is aborted but squid is meant to carry on 
downloading the object (via quick_abort) and the client was part of a 
delay_pool?


e.g. Client <-> delay_pool <-> Squid # downloads at delay-pool speed

 Client --x-- delay_pool <-> Squid # now what speed ?

Cheers,

Pieter


Re: [squid-users] squid - loading, checking and purging

2009-04-14 Thread Pieter De Wit

Hi :)

1 - I *think* webmin has this feature - not sure how to do it directly 
with squid.


2 - As above

3 - export http_proxy=http://squid:port/ and then wget all the URLs

3a - It won't be 100% certain that an object gets stored, as squid will 
work that out itself (using its caching policies etc)
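Point 3 can be sketched as a small loop; the proxy address and URL list below are made up, and the loop only builds the wget command lines rather than fetching anything:

```shell
# Build the wget invocations that would warm the cache through a proxy.
PROXY="http://squid.example.local:3128"   # assumed proxy host:port
CMDS=""
for url in "http://example.com/a" "http://example.com/b"; do
    CMDS="${CMDS}http_proxy=${PROXY} wget -q -O /dev/null ${url}
"
done
printf '%s' "$CMDS"
```

Piping the printed lines to sh (or running each directly) does the actual fetching; squid then decides per its policies what to keep.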


Just a bit of help :)

Cheers,

Pieter

On Tue, 14 Apr 2009, Sir June wrote:



Hi,

I just joined this mailing list and i'd like to get insights on how to do the 
following?

1)  how to check if an object or a URL  is in the squid cache? 

2)  how to purge an object or a URL from the squid cache?

3)  if i have a long list (1000 items)  of objects/url that i want to load into 
the cache, how do i load it?


thanks,
sirjune





Re: [squid-users] Active - Active

2009-04-06 Thread Pieter De Wit

Hi Graham,

That is correct - but since I would like to run a transparent proxy (yes - 
I *could* redirect off the box) I would prefer to keep it on the boxes.


They are going to be beefy boxes to say the least, so might as well use 
them while we can :)


I spoke to the guys and they are happy to have the active tcp session 
fail if one of the boxes dies. They don't do loads of big downloads, so the 
chance that a client will see the failure is very small.


Come to think of it, the only people that will do big downloads are the IT 
Staff (drivers, SP etc) and if those boxes fail, they will have more to 
worry about ;)


Re-reading your email - yes - squid on a private LAN wouldn't even see the 
failure, except for the slight delay with TCP ACKs etc. restarting the 
connection (any active connections) - I haven't found a way around that, 
but I think that might be drifting off-topic


Cheers,

Pieter

On Tue, 7 Apr 2009, graham wrote:


Hello Pieter,
The failover requirement that you describe looks remarkably like one of
the configurations commonly used by Astaro firewall devices.
If you were to conceptually remove the squid function from the failover,
ie in the simplest case onto another device on the private LAN, then an
active-standby pair of firewalls, with common public and private
addresses would be transparent to squid - wouldn't it ?
cheers
Graham
===
On Mon, 2009-04-06 at 03:21 +0200, Pieter De Wit wrote:

When you are confident about this going, we can move on to the HTTPS and
failover questions.

Amos



Hi Guys,

Sorry that I am dropping in on this thread, but it reminded me that I
need to find this out.

I am working on a active-active firewall for a customer. It will be two
Linux boxes (Gentoo for now) running VRRP to publish a virtual IP. I have
done the firewall setup so that connections can failover between the boxes
(takes about 30 seconds - I am sure the heartbeat can be set to less) but
it works ok :)

Now - the trickier part. Let's say someone is currently busy with a download,
can squid do a failover of the connection ? If so, mind pointing me to the
setup docs ?

If this is going to be a feature to add to squid, then I am happy to take
it to the dev mailing list and propose something there.

Please accept my best attempt at ASCII art :)

|eth2 |eth2
___|___   ___|___
|NODE1|   |NODE2|
| |--eth1---eth1--| |
---|---   ---|---
|eth0 |eth0


eth0 - Private LAN
eth1 - heartbeat,failover and ICP LAN
eth2 - Internet

Cheers,

Pieter






Re: [squid-users] Can a guru verify my config?

2009-04-05 Thread Pieter De Wit

When you are confident about this going, we can move on to the HTTPS and
failover questions.

Amos



Hi Guys,

Sorry that I am dropping in on this thread, but it reminded me that I 
need to find this out.


I am working on a active-active firewall for a customer. It will be two 
Linux boxes (Gentoo for now) running VRRP to publish a virtual IP. I have 
done the firewall setup so that connections can failover between the boxes 
(takes about 30 seconds - I am sure the heartbeat can be set to less) but 
it works ok :)


Now - the trickier part. Let's say someone is currently busy with a download, 
can squid do a failover of the connection ? If so, mind pointing me to the 
setup docs ?


If this is going to be a feature to add to squid, then I am happy to take 
it to the dev mailing list and propose something there.


Please accept my best attempt at ASCII art :)

   |eth2 |eth2
___|___   ___|___
|NODE1|   |NODE2|
| |--eth1---eth1--| |
---|---   ---|---
   |eth0 |eth0


eth0 - Private LAN
eth1 - heartbeat,failover and ICP LAN
eth2 - Internet

Cheers,

Pieter


[squid-users] Fail-over config

2009-04-05 Thread Pieter De Wit
Geez - what an off day - forgot to change the Subject and to add that it's 
going to be working as a transparent proxy.


Thanks,

Pieter

On Mon, 6 Apr 2009, Pieter De Wit wrote:


 When you are confident about this going, we can move on to the HTTPS and
 failover questions.

 Amos



Hi Guys,

Sorry that I am dropping in on this thread, but it reminded me that I need 
to find this out.


I am working on a active-active firewall for a customer. It will be two 
Linux boxes (Gentoo for now) running VRRP to publish a virtual IP. I have 
done the firewall setup so that connections can failover between the boxes 
(takes about 30 seconds - I am sure the heartbeat can be set to less) but it 
works ok :)


Now - the trickier part. Let's say someone is currently busy with a download, 
can squid do a failover of the connection ? If so, mind pointing me to the 
setup docs ?


If this is going to be a feature to add to squid, then I am happy to take it 
to the dev mailing list and propose something there.


Please accept my best attempt at ASCII art :)

   |eth2 |eth2
___|___   ___|___
|NODE1|   |NODE2|
| |--eth1---eth1--| |
---|---   ---|---
   |eth0 |eth0


eth0 - Private LAN
eth1 - heartbeat,failover and ICP LAN
eth2 - Internet

Cheers,

Pieter



Re: [squid-users] Squid, Symantec LiveUpdate, and HTTP 1.1 versus HTTP 1.0

2009-03-27 Thread Pieter De Wit

Hi,

iptables can match a DNS name, so you can use that and just restart the 
firewall if they change the records.


If you do something like

iptables -t nat -A PREROUTING -p tcp --dport 80 ! -d liveupdate.symantec.com -j REDIRECT --to-port 3128

it should work - given a hostname, iptables resolves it at insert time and 
adds multiple rules to the chain, one per address.

Not sure on the real command line but email me if you are stuck.

Cheers,

Pieter
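The bypass Marcus suggested can also be written as an explicit accept-then-redirect pair; the interface, port and hostname here are assumptions, and since iptables resolves the hostname once at insert time, the rules need reloading whenever the DNS records change:

```
# let LiveUpdate traffic pass untouched, intercept everything else
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -d liveupdate.symantec.com -j ACCEPT
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j REDIRECT --to-port 3128
```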

- Original Message - 
From: Wong wongb...@telkom.net

To: Marcus Kool marcus.k...@urlfilterdb.com
Cc: Squid-users squid-users@squid-cache.org
Sent: Friday, March 27, 2009 7:40 PM
Subject: Re: [squid-users] Squid, Symantec LiveUpdate, and HTTP 1.1 versus 
HTTP 1.0




Dear all,

I found that Symantec LU has round robin DNS. And they can change DNS A
record at anytime.

Isn't it better if Squid can bypass the domain name in squid.conf?
Is it possible?

Wong

===snip===

[r...@squid root]# nslookup liveupdate.symantec.com
Server: 192.168.1.1
Address:192.168.1.1#53

Non-authoritative answer:
liveupdate.symantec.com canonical name = liveupdate.symantec.d4p.net.
liveupdate.symantec.d4p.net canonical name =
symantec.georedirector.akadns.net.
symantec.georedirector.akadns.net   canonical name = 
a568.d.akamai.net.

Name:   a568.d.akamai.net
Address: 60.254.140.170
Name:   a568.d.akamai.net
Address: 60.254.140.177
Name:   a568.d.akamai.net
Address: 60.254.140.179
Name:   a568.d.akamai.net
Address: 60.254.140.160
Name:   a568.d.akamai.net
Address: 60.254.140.171
Name:   a568.d.akamai.net
Address: 60.254.140.161

- Original Message - 
From: Marcus Kool marcus.k...@urlfilterdb.com

To: Nathan Eady galionlibr...@gmail.com
Cc: squid-users@squid-cache.org
Sent: Thursday, March 26, 2009 04:09
Subject: Re: [squid-users] Squid, Symantec LiveUpdate, and HTTP 1.1 versus
HTTP 1.0



The story about Squid and HTTP 1.1 is long...

To get your LiveUpdate working ASAP you might want to
fiddle with the firewall rules and to NOT redirect
port 80 traffic of Symantec servers to Squid, but
simply let the traffic pass.

Nathan Eady wrote:

Okay, we've got port 80 traffic going transparently to a Squid proxy
here, and I need to make a small configuration change, and I can't
seem to find, either in the man pages nor on the web, the
documentation on how to do it.  It's probably one little line in
squid.conf, but I can't find it.

Here's the deal:
When I access a site (I tested with Google as well as our own offsite
web server) from a computer that is NOT behind the transparent squid
proxy, issuing an HTTP/1.1 request, I get the normal expected HTTP/1.1
response:

nat...@externalbox$ telnet www.galionlibrary.org 80
Trying 209.143.16.23...
Connected to galionlibrary.org.
Escape character is '^]'.
GET / HTTP/1.1
Host: www.galionlibrary.org

HTTP/1.1 200 OK
[snip the rest]

However, when I do the same thing from a system that IS behind the
proxy, I get an HTTP/1.0 response back:
nat...@donalbain:~$ telnet www.galionlibrary.org 80
Trying 209.143.16.23...
Connected to galionlibrary.org.
Escape character is '^]'.
GET / HTTP/1.1
Host: www.galionlibrary.org

HTTP/1.0 200 OK
[snip the rest]

Until recently I never even noticed this, but now Symantec LiveUpdate
is failing on all the systems behind the proxy.  I posted about that
on the Norton Community forum, umm, here:
http://community.norton.com/norton/board/message?board.id=nis_feedback&message.id=42361

The long and short of that thread is that recent updates to LU have
caused it to no longer support HTTP 1.0.  The LU servers are all HTTP
1.1, and now the client requires this.  Our setup is not the only
thing breaking as a result (apparently, the built-in firewalls on
some home routers also have problems with it), but now that I'm aware
Squid is doing this, it ought to be easy to make some small change in
the configuration and get it to return HTTP 1.1 responses, at least
when the server does -- right?

But I'm coming up blank on how.

One other note:  the version of Squid we have, for reasons that aren't
worth going into here, is I believe somewhat outdated (-v says
2.5.STABLE13).  But HTTP 1.1 is certifiably older than dirt, so I'd be
extremely amazed if the Squid that we have doesn't support it...
We're going to update it hopefully pretty soon, but getting LiveUpdate
working again is significantly more urgent (and, hopefully, easier;
updating Squid in our case  probably means a fresh OS install...)

So where and how do I configure what Squid does with HTTP versions?
Where is this documented?

TIA,

Nathan Eady
Technology Coordinator
Galion Public Library











Re: [squid-users] mysterious crashes

2009-03-10 Thread Pieter De Wit

Hi Hoover,

Just a thought - what is the memory limit set to in squid and are other
services like gkrellmd running ?

Cheers,

Pieter

On Tue, 10 Mar 2009 15:43:17 -0700 (PDT), Hoover Chan c...@sacredsf.org
wrote:
 It looks like Squid is what's crashing (I left a terminal session open
with
 top running) but it's dragging the whole OS down with it to the point
 where the only way out is to reset or power cycle the computer.
 
 Very frustrating.
 
 
 -- 
 Hoover Chan c...@sacredsf.org 
 Technology Director 
 Schools of the Sacred Heart 
  Broadway St. 
 San Francisco, CA 94115
 
 
 - Rick Chisholm rchish...@parallel42.ca wrote:
 
 might be worthwhile to run memtest86 against your server to rule out 
 memory issues, esp. since you appear to have clear logs.  Is squid 
 crashing or is the OS locking up?
 
 Hoover Chan wrote:
  Hi, I'm new to this mailing list and relatively new to managing
 Squid.
 
  I'm running a Squid cache using version 2.5 and 1 Gb of RAM. I'm
 running into a problem where the system crashes so hard that the only
 way to bring it back up is to power cycle the server. Subsequent
 examination of the log files don't reveal any diagnostic information.
 The logs seem to show that the system is running just fine without
 incident.
 
  Any thoughts on what to look at? It's happening at least once a week
 now.
 
  Thanks in advance.
 
 
  -- 
  Hoover Chan c...@sacredsf.org 
  Technology Director 
  Schools of the Sacred Heart 
   Broadway St. 
  San Francisco, CA 94115
 
 
 


Re: [squid-users] Squid3 just Died

2008-12-09 Thread Pieter De Wit

Hi,

Might be totally off here, but I noted your swap size is large. Could it
be that the cache has more objects (in count and in byte count ?) than can
fit into a 32-bit counter ?

I got to this by seeing that it crashes at the cache rebuild section, as
well as the fact that the build is i486.
Like I said, might be *way* off but hey :)

Cheers,

Pieter

On Tue, 09 Dec 2008 19:04:54 -0700, [EMAIL PROTECTED] wrote:
 Hello.
 
 I came across something weird. Squid3 just stopped working, just dies
 without any error message. My server was running as usual and all over a
 sudden users weren't getting internet. I checked if all the normal
 processes were running and noticed squid wasn't. Now, I try to start the
 server and it starts and dies after a few seconds. Heres part of the
 cache.log file:
 
 2008/12/09 22:03:07| Starting Squid Cache version 3.0.PRE5 for
 i486-pc-linux-gnu...
 2008/12/09 22:03:07| Process ID 4063
 2008/12/09 22:03:07| With 1024 file descriptors available
 2008/12/09 22:03:07| DNS Socket created at 0.0.0.0, port 33054, FD 8
 2008/12/09 22:03:07| Adding nameserver 200.42.213.11 from squid.conf
 2008/12/09 22:03:07| Adding nameserver 200.42.213.21 from squid.conf
 2008/12/09 22:03:07| Unlinkd pipe opened on FD 13
 2008/12/09 22:03:07| Swap maxSize 10240 KB, estimated 7876923
 objects
 2008/12/09 22:03:07| Target number of buckets: 393846
 2008/12/09 22:03:07| Using 524288 Store buckets
 2008/12/09 22:03:07| Max Mem  size: 102400 KB
 2008/12/09 22:03:07| Max Swap size: 10240 KB
 2008/12/09 22:03:07| Rebuilding storage in /var/log/squid/cache (DIRTY)
 2008/12/09 22:03:07| Using Least Load store dir selection
 2008/12/09 22:03:07| Current Directory is /
 2008/12/09 22:03:07| Loaded Icons.
 2008/12/09 22:03:07| Accepting transparently proxied HTTP connections at
 192.168.2.1, port 3128, FD 15.
 2008/12/09 22:03:07| HTCP Disabled.
 2008/12/09 22:03:07| WCCP Disabled.
 2008/12/09 22:03:07| Ready to serve requests.
 2008/12/09 22:03:11| Starting Squid Cache version 3.0.PRE5 for
 i486-pc-linux-gnu...
 2008/12/09 22:03:11| Process ID 4066
 2008/12/09 22:03:11| With 1024 file descriptors available
 2008/12/09 22:03:11| DNS Socket created at 0.0.0.0, port 33054, FD 8
 2008/12/09 22:03:11| Adding nameserver 200.42.213.11 from squid.conf
 2008/12/09 22:03:11| Adding nameserver 200.42.213.21 from squid.conf
 2008/12/09 22:03:11| Unlinkd pipe opened on FD 13
 2008/12/09 22:03:11| Swap maxSize 10240 KB, estimated 7876923
 objects
 2008/12/09 22:03:11| Target number of buckets: 393846
 2008/12/09 22:03:11| Using 524288 Store buckets
 2008/12/09 22:03:11| Max Mem  size: 102400 KB
 2008/12/09 22:03:11| Max Swap size: 10240 KB
 2008/12/09 22:03:11| Rebuilding storage in /var/log/squid/cache (DIRTY)
 2008/12/09 22:03:11| Using Least Load store dir selection
 2008/12/09 22:03:11| Current Directory is /
 2008/12/09 22:03:11| Loaded Icons.
 2008/12/09 22:03:11| Accepting transparently proxied HTTP connections at
 192.168.2.1, port 3128, FD 15.
 2008/12/09 22:03:11| HTCP Disabled.
 2008/12/09 22:03:11| WCCP Disabled.
 2008/12/09 22:03:11| Ready to serve requests.
 2008/12/09 22:03:12| Store rebuilding is  0.6% complete
 2008/12/09 22:03:17| Starting Squid Cache version 3.0.PRE5 for
 i486-pc-linux-gnu...
 2008/12/09 22:03:17| Process ID 4069
 2008/12/09 22:03:17| With 1024 file descriptors available
 2008/12/09 22:03:17| DNS Socket created at 0.0.0.0, port 33054, FD 8
 2008/12/09 22:03:17| Adding nameserver 200.42.213.11 from squid.conf
 2008/12/09 22:03:17| Adding nameserver 200.42.213.21 from squid.conf
 2008/12/09 22:03:17| Unlinkd pipe opened on FD 13
 2008/12/09 22:03:17| Swap maxSize 10240 KB, estimated 7876923
 objects
 2008/12/09 22:03:17| Target number of buckets: 393846
 2008/12/09 22:03:17| Using 524288 Store buckets
 2008/12/09 22:03:17| Max Mem  size: 102400 KB
 2008/12/09 22:03:17| Max Swap size: 10240 KB
 2008/12/09 22:03:17| Rebuilding storage in /var/log/squid/cache (DIRTY)
 2008/12/09 22:03:17| Using Least Load store dir selection
 2008/12/09 22:03:17| Current Directory is /
 2008/12/09 22:03:17| Loaded Icons.
 2008/12/09 22:03:17| Accepting transparently proxied HTTP connections at
 192.168.2.1, port 3128, FD 15.
 2008/12/09 22:03:17| HTCP Disabled.
 2008/12/09 22:03:17| WCCP Disabled.
 2008/12/09 22:03:17| Ready to serve requests.
 2008/12/09 22:03:18| Store rebuilding is  0.6% complete
 
 
 Please help. Your help will be appreciated.
 
 Thank you in advanced.



Re: [squid-users] Squid3 just Died

2008-12-09 Thread Pieter De Wit

Well - something is killing it. It got a lot further than before - it
stopped at 0.6% last time, iirc ?

On Tue, 09 Dec 2008 22:42:52 -0400, Wilson Hernandez - MSD, S. A.
[EMAIL PROTECTED] wrote:
 That i486 thing just might have been the original kernel. I don't know 
 why it says i486.
 
 I ran tail -f /var/log/squid/cache.log and noticed that squid tries to 
 rebuild the cache, it stops and restarts again:
 
 Store rebuilding is 10.1% complete
 
 I don't want to delete my cache. That's the only solution I've found on 
 the internet:
 
 1) Shutdown your squid server
 squid -k shutdown
 
 2) Remove the cache directory
 rm -r /squid/cache/*
 
 3) Re-Create the squid cache directory
 squid -z
 
 4) Start the squid
 
 My cache is pretty big and it would take a while to delete all the stuff 
 in there. Also, I will lose all that data - months of cached objects...
 
 Pieter De Wit wrote:
 Hi,
 
 Might be totally off here, but I noted your swap size is large. Could
 it
 be that the cache has more objects (in count and in byte count ?) than
 can
 fit into a 32-bit counter ?
 
  I got to this by seeing that it crashes at the cache rebuild section as
  well as the fact that the build is i486.
 
 Like I said, might be *way* off but hey :)
 
 Cheers,
 
 Pieter
 
 On Tue, 09 Dec 2008 19:04:54 -0700, [EMAIL PROTECTED] wrote:
 Hello.

 I came across something weird. Squid3 just stopped working, just dies
 without any error message. My server was running as usual and all over
 a
 sudden users weren't getting internet. I checked if all the normal
 processes were running and noticed squid wasn't. Now, I try to start
 the
 server and it starts and dies after a few seconds. Heres part of the
 cache.log file:

 2008/12/09 22:03:07| Starting Squid Cache version 3.0.PRE5 for
 i486-pc-linux-gnu...
 2008/12/09 22:03:07| Process ID 4063
 2008/12/09 22:03:07| With 1024 file descriptors available
 2008/12/09 22:03:07| DNS Socket created at 0.0.0.0, port 33054, FD 8
 2008/12/09 22:03:07| Adding nameserver 200.42.213.11 from squid.conf
 2008/12/09 22:03:07| Adding nameserver 200.42.213.21 from squid.conf
 2008/12/09 22:03:07| Unlinkd pipe opened on FD 13
 2008/12/09 22:03:07| Swap maxSize 10240 KB, estimated 7876923
 objects
 2008/12/09 22:03:07| Target number of buckets: 393846
 2008/12/09 22:03:07| Using 524288 Store buckets
 2008/12/09 22:03:07| Max Mem  size: 102400 KB
 2008/12/09 22:03:07| Max Swap size: 10240 KB
 2008/12/09 22:03:07| Rebuilding storage in /var/log/squid/cache (DIRTY)
 2008/12/09 22:03:07| Using Least Load store dir selection
 2008/12/09 22:03:07| Current Directory is /
 2008/12/09 22:03:07| Loaded Icons.
 2008/12/09 22:03:07| Accepting transparently proxied HTTP connections
 at
 192.168.2.1, port 3128, FD 15.
 2008/12/09 22:03:07| HTCP Disabled.
 2008/12/09 22:03:07| WCCP Disabled.
 2008/12/09 22:03:07| Ready to serve requests.
 2008/12/09 22:03:11| Starting Squid Cache version 3.0.PRE5 for
 i486-pc-linux-gnu...
 2008/12/09 22:03:11| Process ID 4066
 2008/12/09 22:03:11| With 1024 file descriptors available
 2008/12/09 22:03:11| DNS Socket created at 0.0.0.0, port 33054, FD 8
 2008/12/09 22:03:11| Adding nameserver 200.42.213.11 from squid.conf
 2008/12/09 22:03:11| Adding nameserver 200.42.213.21 from squid.conf
 2008/12/09 22:03:11| Unlinkd pipe opened on FD 13
 2008/12/09 22:03:11| Swap maxSize 10240 KB, estimated 7876923
 objects
 2008/12/09 22:03:11| Target number of buckets: 393846
 2008/12/09 22:03:11| Using 524288 Store buckets
 2008/12/09 22:03:11| Max Mem  size: 102400 KB
 2008/12/09 22:03:11| Max Swap size: 10240 KB
 2008/12/09 22:03:11| Rebuilding storage in /var/log/squid/cache (DIRTY)
 2008/12/09 22:03:11| Using Least Load store dir selection
 2008/12/09 22:03:11| Current Directory is /
 2008/12/09 22:03:11| Loaded Icons.
 2008/12/09 22:03:11| Accepting transparently proxied HTTP connections
 at
 192.168.2.1, port 3128, FD 15.
 2008/12/09 22:03:11| HTCP Disabled.
 2008/12/09 22:03:11| WCCP Disabled.
 2008/12/09 22:03:11| Ready to serve requests.
 2008/12/09 22:03:12| Store rebuilding is  0.6% complete
 2008/12/09 22:03:17| Starting Squid Cache version 3.0.PRE5 for
 i486-pc-linux-gnu...
 2008/12/09 22:03:17| Process ID 4069
 2008/12/09 22:03:17| With 1024 file descriptors available
 2008/12/09 22:03:17| DNS Socket created at 0.0.0.0, port 33054, FD 8
 2008/12/09 22:03:17| Adding nameserver 200.42.213.11 from squid.conf
 2008/12/09 22:03:17| Adding nameserver 200.42.213.21 from squid.conf
 2008/12/09 22:03:17| Unlinkd pipe opened on FD 13
 2008/12/09 22:03:17| Swap maxSize 10240 KB, estimated 7876923
 objects
 2008/12/09 22:03:17| Target number of buckets: 393846
 2008/12/09 22:03:17| Using 524288 Store buckets
 2008/12/09 22:03:17| Max Mem  size: 102400 KB
 2008/12/09 22:03:17| Max Swap size: 10240 KB
 2008/12/09 22:03:17| Rebuilding storage in /var/log/squid/cache (DIRTY)
 2008/12/09 22:03:17| Using Least Load store dir selection
 2008/12/09 22:03:17| Current Directory is /
 2008

Re: [squid-users] Question

2008-11-06 Thread Pieter De Wit

Amos Jeffries wrote:

Monah Baki wrote:

Hi all,

We have 2 squid servers running 2.7 stable 5. One is locally in our 
data center, the other is located remotely on the clients network. Is 
it possible to have whatever cached objects our local server has be 
replicated on the client?


Each squid caches what it can from any data source.
  If traffic flows through your squid on its way to the client, the 
objects will propagate down.
Or, if you setup both squid as peers with client squid to prefer 
sourcing data from your squid's cache, it will propagate across.


I think what Monah wanted was the ICP setup. Look at 
http://www.visolve.com/squid/squid24s1/neighbour.php they explain all 
the cache_peer settings there. iirc, you are looking for sibling


If yes, in my squid.conf, what should I look for?



The above link has examples
depends on your cache_peer config. PARENT_HIT or SIBLING_HIT or 
similar source retrieval entries when the objects were retrieved.


They become regular local TCP*_HIT entries while cached.

Amos


Cheers !


Re: [squid-users] Slow for one user, fast for everyone else

2008-10-08 Thread Pieter De Wit

RM wrote:

On Mon, Oct 6, 2008 at 4:08 AM, RM [EMAIL PROTECTED] wrote:
  

On Mon, Oct 6, 2008 at 1:45 AM, Pieter De Wit [EMAIL PROTECTED] wrote:


Hi JL,

Does your server use DNS in its logging ? Perhaps it's reverse DNS ?

If he downloads a big file, does the speed pick up ?

Cheers,

Pieter

JL wrote:
  

I have a server setup which provides an anonymous proxy service to
individuals across the world. I have one specific user that is
experiencing very slow speeds. Other users performing the very same
activities do not experience the slow speeds, myself included. I asked
the slow user to do traceroutes and it appeared there were no network
routing issues but for some reason it is VERY slow for him to the
point of being unusable. The slow user can perform the same exact
activities perfectly fine using another proxy service but with my
proxy it is too slow.

Any help is appreciated.


  

Thanks Pieter for the reply.

I am not sure what you mean by DNS in its logging. I am assuming you
mean that in the logs hostnames as opposed to IP addresses are logged.
If so, that is not the case, only IP addresses are logged in the Squid
logs. I realize you are probably are also referring to reverse DNS for
the user but just in case you mean reverse DNS for the server, I do
have reverse DNS setup for the server IP's.

I will have to ask to see if big downloads speed up for the user.

Any other help is appreciated.




One thing I forgot to ask is: if he downloads a big file and the speed
picks up, what does this say and how do I fix the problem?

Any other suggestions are appreciated as well.
  
This would mean that the problem is related to logging or something along 
those lines - it only happens at the start of the connection/request.


Cheers,

Pieter


Re: [squid-users] Slow for one user, fast for everyone else

2008-10-06 Thread Pieter De Wit

Hi JL,

Does your server use DNS in its logging ? Perhaps it's reverse DNS ?

If he downloads a big file, does the speed pick up ?

Cheers,

Pieter

JL wrote:

I have a server setup which provides an anonymous proxy service to
individuals across the world. I have one specific user that is
experiencing very slow speeds. Other users performing the very same
activities do not experience the slow speeds, myself included. I asked
the slow user to do traceroutes and it appeared there were no network
routing issues but for some reason it is VERY slow for him to the
point of being unusable. The slow user can perform the same exact
activities perfectly fine using another proxy service but with my
proxy it is too slow.

Any help is appreciated.
  




Re: [squid-users] Adding secondary Disk for Cache

2008-08-19 Thread Pieter De Wit

Juan C. Crespo R. wrote:

Dears

   I want to ask if anyone could tell me an easy way (step by step) to 
add a secondary disk to the squid Cache


Thanks

Hi,

Simply add the drive to the OS as per normal - then add another 
cache_dir line to squid.conf.
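For example (mount point and sizes are assumptions; run squid -z afterwards so squid can build the swap directories on the new disk):

```
# squid.conf - existing cache plus the newly mounted second disk
cache_dir ufs /var/spool/squid 16000 16 256
cache_dir ufs /mnt/disk2/squid 16000 16 256
```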


Cheers,

Pieter


Re: [squid-users] Adjusting Parent Cache weight based on acl

2007-12-14 Thread Pieter De Wit
Hi Amos,

Thanks for the reply - so it seems that squid already does what I need
(in a way). Would you mind expanding on the data accounting comment? All
I could find on Google was IP accounting in squid. Like I said, the
servers aren't ready yet so I can't test what I need to, but so far it's
looking good :)

Thanks,

Pieter

Amos Jeffries wrote:

 This is done directly via allow/deny ACLs on any given peer already,
 and indirectly via successful data accounting, which modifies the
 weighting.
 or I want to say something
 like, when Client A requests it from username user and from IP a.b.c.d
 (say a dial up) then decrease the weight of the adsl proxy.

 This is already implemented in all weighted-peering algorithms in squid.

 cache_peer_access allows/prevents any data being retrieved from a
 peer. Each time data is successfully retrieved it adds to the
 weighting of the useful source peer.




[squid-users] Adjusting Parent Cache weight based on acl

2007-12-13 Thread Pieter De Wit
Hello Everyone,

Although I haven't reached the stage yet of needing the following
feature I thought I might as well start talking about it soon. I would
like to suggest (if there isn't already a way of doing this) the
following idea for Squid:

Adjusting a Parent Cache's weight based on acl - What this means is the
following:

I have a main proxy server called (let's say) main_proxy. I have two
sibling proxy servers called child1_proxy and child2_proxy. Child1 and 2
proxies both have their own internet link of different sizes (one is
adsl and the other an E1). Now to balance requests between them is
simple, just add them with the same weight. To use one for a set of
users etc is simple. What I would like to do is dynamically control the
weight of each cache, based on acl's

Let's say Client A is an exec and needs high speed caching, I want some
requests to go over the adsl and some over the E1. Now I would like to
do this during some time or something else...all do'able with the
current acl's, but what if I want to change the proxy based on
system/network load or some external factor, or I want to say something
like, when Client A requests it from username user and from IP a.b.c.d
(say a dial up) then decrease the weight of the adsl proxy.
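For what it's worth, the static parts of this are already expressible in squid.conf; a sketch of weighted parents plus an ACL restriction (hostnames, ports and the ACL range here are hypothetical):

```
# Two parents with different link sizes: E1 favoured 4:1 over ADSL
cache_peer child1_proxy parent 3128 3130 round-robin weight=1   # ADSL
cache_peer child2_proxy parent 3128 3130 round-robin weight=4   # E1

# Keep dial-up users off the ADSL parent entirely
acl dialup src 10.9.0.0/16
cache_peer_access child1_proxy deny dialup
```

The weights are fixed at configuration time, though, which is exactly the limitation being raised: there is no built-in way to vary them with system load or other external factors.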

I hope this is making sense, since I feel like I haven't really carried
the idea across correctly.

Thanks,

Pieter De Wit




RE: [squid-users] Howto Allow 1 and block another

2005-12-10 Thread Pieter De Wit
Hey Barry,

Try this:

acl julie src 192.168.1.20/32
acl jess src 192.168.1.12/32

HTH,

Pieter
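
With those /32 masks in place, the whole rule set from the original message would read (ordering matters: squid stops at the first matching http_access rule):

```
acl julie src 192.168.1.20/32
acl jess  src 192.168.1.12/32

http_access allow localhost
http_access allow julie
http_access deny jess
http_access deny all
```

The original /24 masks matched the whole 192.168.1.0 network for both ACLs, so both users hit the "allow julie" rule first.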

-Original Message-
From: Barry Rumsey [mailto:[EMAIL PROTECTED] 
Sent: 2005/12/11 05:12
To: Squid
Subject: [squid-users] Howto Allow 1 and block another

I am trying to setup squid to allow one user by ip address and block
another user on another ip address.
This is what I have so far.

acl julie src 192.168.1.20/24
acl jess src 192.168.1.12/24

http_access allow localhost
http_access allow julie
http_access deny jess

# And finally deny all other access to this proxy http_access deny all

Thanks for any help in advance
B.Rumsey
This e-mail is sent on the Terms and Conditions that can be accessed by
clicking on this link: http://www.vodacom.net/legal/email.aspx


RE: [squid-users] adaptive bandwidth based on requested file size

2005-12-06 Thread Pieter De Wit
Hey A.

I would think that you would have to look into either iptables/CBQ or
delay pools. I am sure I read somewhere that you can have ACLs based
on the reply body size.

Cheers,

Pieter 
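
Squid's delay pools do behave this way by construction: each client gets a token bucket, so a small download completes inside the burst while a large one drains the bucket and drops to the refill rate. A sketch in squid 2.5 syntax (the address range and rates are hypothetical):

```
# Class 2 pool: one aggregate bucket plus a bucket per client host
acl lanusers src 192.168.0.0/16        # hypothetical client range
delay_pools 1
delay_class 1 2
# aggregate unlimited; per-host: refill 16 KB/s after a 1 MB burst
delay_parameters 1 -1/-1 16000/1000000
delay_access 1 allow lanusers
delay_access 1 deny all
```

So A's 3 MB download would fall to the restricted rate partway through, while B's 300 KB file finishes at full speed.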

-Original Message-
From: A. Laksmana [mailto:[EMAIL PROTECTED] 
Sent: 2005/12/04 05:13
To: squid-users@squid-cache.org
Subject: [squid-users] adaptive bandwidth based on requested file size

Is it possible for squid to limit bandwidth based on requested file size
with delay_pools?
So, if A downloads a 3 MB file he will get less bandwidth than B who
downloads a 300 KB file.

If not, is there any clue how to do it?



squid2.5-10, shorewall2.2

rgrds
A. Laksmana


RE: [squid-users] autoconfig pac file

2005-11-23 Thread Pieter De Wit
Hello Toto,

From a normal machine, try to download the file directly - something like:

wget http://10.1.1.13/proxy/proxy.pac

I think the problem lies with apache2 rather than squid or the file.

Cheers,

Pieter
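
If Apache turns out to be the culprit, a common cause is the MIME type for .pac files not being registered, so browsers refuse to treat the download as a proxy autoconfig script. The usual fix is one line in the Apache configuration:

```
# Serve .pac files with the proxy-autoconfig MIME type
AddType application/x-ns-proxy-autoconfig .pac
```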

-Original Message-
From: Toto Carpaccio [mailto:[EMAIL PROTECTED] 
Sent: 2005/11/23 14:18
To: squid-users@squid-cache.org
Subject: [squid-users] autoconfig pac file

Hi,

I'm using squid installed on a debian server. I've also installed Apache2
(and checked the pac extension in mimes.conf), and created a
directory in /var/www/ called proxy where I put a proxy.pac file
containing:

function FindProxyForURL(url, host)
{
    if (isInNet(host, "10.2.0.0", "255.255.0.0"))
        return "PROXY 10.1.1.13:3128";
    else
        return "DIRECT";
}

I want users of the 10.2.0.0/16 network to use the proxy and others to go
directly.

I've tried to configure IExplorer so the browser fetches the autoconfig
file located at http://10.1.1.13/proxy/proxy.pac

I can't make it work properly; all connections are sent directly to the
internet.

Can you please help ?

Thanks.


RE: [squid-users] Pipeline between two caches

2005-11-18 Thread Pieter De Wit
Hello Christoph,

That I have, but I have 10-12 connections between the server here and
the remote one. I would like for that to be 1 connection.

Thanks,

Pieter 

-Original Message-
From: Christoph Haas [mailto:[EMAIL PROTECTED] 
Sent: 2005/11/18 11:36
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Pipeline between two caches

On Friday 18 November 2005 07:25, Pieter De Wit wrote:
 I have two proxies (one remote; both squid). I was wondering if I can 
 pipeline the two, or at least get the number of connections between 
 the two down. Can they be connected via a single TCP connection ?

You probably want to create a proxy chain. See cache_peer in the
documentation.

Regards
 Christoph
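
A minimal chain sketch for the local squid (hostname hypothetical). Note that squid still opens multiple persistent connections to a parent, so this reduces rather than eliminates the connection count:

```
# Forward all misses through the remote cache
cache_peer remote-cache.example.com parent 3128 0 no-query default
never_direct allow all
```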
--


[squid-users] Pipeline between two caches

2005-11-17 Thread Pieter De Wit
Hello List,
 
I have two proxies (one remote; both squid). I was wondering if I can
pipeline the two, or at least get the number of connections between the
two down. Can they be connected via a single TCP connection ?

Thanks,

Pieter


[squid-users] external_acl_type

2005-11-15 Thread Pieter De Wit
Hello List,
 
Can someone please point me to a resource describing what an external
ACL program must return, and how. In my quest to bind an IP to a
username I have created the following:
 
ip_to_user.sh
#!/bin/bash

while [ 1 ]
do
    while read ip
    do
        ip_done=0

        echo "Auth'ing $ip..." >> /var/log/ip_to_user.log

        if [ "$ip" = "1.2.3.4" ]; then
            echo "OK user=user"
            echo "OK user=user" >> /var/log/ip_to_user.log
            ip_done=1
        fi
    done
done

That seems to work, but at some stage all 5 of them use 100% CPU - even
when they are not auth'ing.

Did I miss something ?

Thanks,

Pieter
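
For context, squid's external ACL protocol is line-based: the helper reads one request per line on stdin and writes back one reply line, "OK [key=value ...]" or "ERR". The helper must simply block in read; an extra busy outer loop spins the CPU once stdin hits EOF, which matches the symptom above. A minimal sketch (the IP-to-username mapping is hypothetical):

```shell
#!/bin/bash
# Minimal external_acl_type helper sketch.
# Squid sends one request per line (here: the client IP) and expects
# exactly one reply line per request: "OK [key=value ...]" or "ERR".
# No outer loop: when squid closes stdin, read fails and the helper exits.
auth_by_ip() {
    while read ip _; do
        if [ "$ip" = "1.2.3.4" ]; then
            echo "OK user=user1"
        else
            echo "ERR"
        fi
    done
}

# Feed two lookups the way squid would:
printf '1.2.3.4\n5.6.7.8\n' | auth_by_ip
```

A helper like this would be wired up with something along the lines of `external_acl_type ip2user %SRC /usr/local/bin/ip_to_user.sh` in squid.conf, though the exact format tokens depend on the squid version.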
“This e-mail is sent on the Terms and Conditions that can be accessed by 
Clicking on this link http://www.vodacom.net/legal/email.aspx 


[squid-users] Binding IP address to username

2005-11-09 Thread Pieter De Wit
Hello Everyone,

I would like to know how I can bind an IP address to a username in squid. So 
let's say I have a user called user1 and a machine on IP 1.2.3.4. I would like 
squid to log any requests that come from 1.2.3.4 as if the user user1 logged in.

Thanks,

Pieter De Wit
