[squid-users] IOWAIT and second disk

2006-09-19 Thread Michał Margula

Hello!

	I have an aufs cache_dir on /dev/sdc. Iowait takes about 40% of CPU. If I add 
another disk, will it make that load lower? Is it a good idea to use two 
disks with aufs (diskd is unfortunately unstable)?


--
Michał Margula, [EMAIL PROTECTED], http://alchemyx.uznam.net.pl/
W życiu piękne są tylko chwile [Ryszard Riedel]


Re: [squid-users] IOWAIT and second disk

2006-09-19 Thread Adrian Chadd
On Tue, Sep 19, 2006, Michał Margula wrote:
 Hello!
 
   I have an aufs cache_dir on /dev/sdc. Iowait takes about 40% of CPU. If I add 
 another disk, will it make that load lower? Is it a good idea to use two 
 disks with aufs (diskd is unfortunately unstable)?

Yes, AUFS is fine.

Take a look at http://www.squid-cache.org/~adrian/coss/ - note the dramatic
drop in IOWAIT when using COSS.

It also depends on the hardware you're using. What kind of disks are they?
Which OS? Which controller?



Adrian



Re: [squid-users] [squid-users]: WARNING: Disk space over limit: 31457564 KB > 31457280 KB. How to avoid it?

2006-09-19 Thread Matus UHLAR - fantomas
 On Mon 2006-09-18 10:31 -0700, Pranav Desai wrote:
  I am running some polymix-4 tests with squid 2.6-S3. Every single test
  I have run so far has these warnings. Some FAQs and other searches
  suggest that this is caused by a corrupted swap.state, and the
  solution is to wipe out the cache dirs and rebuild the cache.

what are your cache_swap_low, cache_swap_high and maximum_object_size
settings?

 On 9/18/06, Henrik Nordstrom [EMAIL PROTECTED] wrote:
Small amounts above the limit simply mean that things run a bit too quickly
for a while, with the garbage collection not keeping up.
 
Very large amounts above the limit (often many times the amount of
storage available) indicate a corrupted swap.state.
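The garbage-collection watermarks Henrik describes are controlled by two squid.conf directives; a minimal sketch (the percentages shown are the usual defaults, included here only for illustration):

```
# start replacing objects once the cache_dir reaches 90% of its size
cache_swap_low 90
# replace increasingly aggressively as usage approaches 95%
cache_swap_high 95
```

Lowering cache_swap_high gives the collector more headroom before the hard limit is hit.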

On 18.09.06 16:40, Pranav Desai wrote:
 Could any of the above have any impact on the performance.

no, unless your filesystem is 90% full. UN*X filesystems behave faster if
they have enough free space. I noticed performance degradation when the disk
became 90% full; currently it's around 85%, which is quite OK I'd say.

 Does the LRU start after the disk is full, and can that have any impact? I
 am just thinking out loud ... will try some more tests ...

try using a heap replacement policy. I use heap LFUDA for disk; even heap LRU
is faster than the old non-heap LRU.

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
REALITY.SYS corrupted. Press any key to reboot Universe.


Re: [squid-users] Efficiency of around 10%, is that normal

2006-09-19 Thread Matus UHLAR - fantomas
On 18.09.06 13:21, Bas Rijniersce wrote:
 I was used to getting efficiencies of around 50%. My current setup does
 not get past 10% as reported by Calamaris. There are around 40 people
 behind the proxy. After Googling around I added the following lines to
 squid.conf:
  
 cache_mem 256 MB
 maximum_object_size 32 MB
 maximum_object_size_in_memory 128 KB
 
 System has 512 MB of memory and very few other tasks. Is there
 anything else I can do to improve efficiency?

what are the sizes of your cache_dirs? What replacement policy do you use?
heap GDSF or LFUDA policies help improve hit/byte ratios, but you need
enough disk space to cache.
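The policies Matus mentions are each selected with a single squid.conf line; a sketch (directory path and sizes are illustrative, not from this thread):

```
# the replacement policy must appear before the cache_dir lines it applies to
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
cache_dir aufs /var/spool/squid 20000 16 256
```

LFUDA tends to favour byte hit ratio and GDSF request hit ratio, which is why they are often paired this way for disk and memory respectively.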

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Where do you want to go to die? [Microsoft]


Re: [squid-users] IOWAIT and second disk

2006-09-19 Thread Michał Margula

Adrian Chadd wrote:

On Tue, Sep 19, 2006, Michał Margula wrote:

Hello!

	I have an aufs cache_dir on /dev/sdc. Iowait takes about 40% of CPU. If I add 
another disk, will it make that load lower? Is it a good idea to use two 
disks with aufs (diskd is unfortunately unstable)?


Yes, AUFS is fine.

Take a look at http://www.squid-cache.org/~adrian/coss/ - note the dramatic
drop in IOWAIT when using COSS.

It also depends on the hardware you're using. What kind of disks are they?
Which OS? Which controller?


Linux 2.6.15.1; /dev/sdc with the cache is a Seagate ST336754LC (Cheetah), 
36 GB, 15k RPM, and logging is done on /dev/md0, which is a RAID1 
consisting of two disks like the one above. But I don't think the RAID1 
causes any trouble; iostat shows the majority of IO operations on /dev/sdc, 
where the cache resides.


Hardware is 2 x Xeon 3.6 GHz, 2 GB RAM, SCSI storage controller: LSI Logic 
53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI on an Intel E7520.


Now on the topic of COSS, which is the better solution (/dev/sdc is 36 GB and 
/dev/sdd will also be 36 GB, same disk model):

1) /dev/sdc and /dev/sdd used in full with AUFS, and a 1 GB COSS file in 
memory
2) /dev/sdc with AUFS and /dev/sdd with COSS (wouldn't 36 GB be too much?)
3) /dev/sdc split between COSS and AUFS (if so, at what ratio?), /dev/sdd 
with AUFS
4) /dev/sdc and /dev/sdd each split between COSS and AUFS

I thought that (1) would be OK. What do you think? What is the recommended 
size of a COSS file?


--
Michał Margula, [EMAIL PROTECTED], http://alchemyx.uznam.net.pl/
W życiu piękne są tylko chwile [Ryszard Riedel]


Re: [squid-users] IOWAIT and second disk

2006-09-19 Thread Adrian Chadd
I'd suggest splitting it: one disk for AUFS (36 GB), one disk for
COSS. Don't use a filesystem with a 36 GB file on it; just tell Squid
to use the raw disk device. It works really well. Just remember to
read the COSS setup example in the wiki.

http://wiki.squid-cache.org/SquidFaq/CyclicObjectStorageSystem

As for the COSS parameters, I'm not certain. I'd start with the default
configuration, which seems sensible, and run with it for a while.
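Adrian's suggestion translates to roughly the following squid.conf fragment; this is a sketch only, with illustrative sizes and the device names from this thread (see the wiki page above for the authoritative COSS options):

```
# first disk: whole-disk AUFS cache_dir on a mounted filesystem
cache_dir aufs /cache1 34000 64 256
# second disk: used raw by COSS, holding small objects only
cache_dir coss /dev/sdd 34000 max-size=131072 block-size=512
```

COSS stores objects in a cyclic log, so pointing it at the raw device avoids filesystem overhead entirely.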




Adrian

On Tue, Sep 19, 2006, Michał Margula wrote:

 Now on the topic of COSS, which is the better solution (/dev/sdc is 36 GB and 
 /dev/sdd will also be 36 GB, same disk model):
 
 1) /dev/sdc and /dev/sdd used in full with AUFS, and a 1 GB COSS file in 
 memory
 2) /dev/sdc with AUFS and /dev/sdd with COSS (wouldn't 36 GB be too much?)
 3) /dev/sdc split between COSS and AUFS (if so, at what ratio?), /dev/sdd 
 with AUFS
 4) /dev/sdc and /dev/sdd each split between COSS and AUFS
 
 I thought that (1) would be OK. What do you think? What is the recommended 
 size of a COSS file?
 
 -- 
 Michał Margula, [EMAIL PROTECTED], http://alchemyx.uznam.net.pl/
 W życiu piękne są tylko chwile [Ryszard Riedel]


[squid-users] Reverse proxy HTTPS port on 8443

2006-09-19 Thread Mohamed Navas V

hi,

We have a setup with one reverse proxy for multiple backend web
servers. All these servers carry HTTP traffic only, with accel port
80.

But one additional server is proposed within the existing setup, as follows:

  user --(request on port 8080)--> R.Proxy --(request on port 8080)--> Web Server
  user <--(reply on port 8443)---- R.Proxy <--(reply on port 8443)---- Web Server

i.e. the user will request http://example.com:8080/abc but should get an
HTTPS reply as https://example.com:8443/abc

We are using Squid 2.5; all the other servers are listening on ports 80
and 443 only.

What changes need to be made in the config file for this?

Br--
Navas


Re: [squid-users] IOWAIT and second disk

2006-09-19 Thread Matus UHLAR - fantomas
On 19.09.06 09:56, Michał Margula wrote:
 Hardware is 2 x Xeon 3.6, 2GB RAM,  SCSI storage controller: LSI Logic 
 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI on Intel E7520.
 
 Now on the topic of COSS, which is the better solution (/dev/sdc is 36 GB and 
 /dev/sdd will also be 36 GB, same disk model):

 1) /dev/sdc and /dev/sdd used in full with AUFS, and a 1 GB COSS file in 
 memory
 2) /dev/sdc with AUFS and /dev/sdd with COSS (wouldn't 36 GB be too much?)
 3) /dev/sdc split between COSS and AUFS (if so, at what ratio?), /dev/sdd 
 with AUFS
 4) /dev/sdc and /dev/sdd each split between COSS and AUFS
 
 I thought that (1) would be OK. What do you think? What is the recommended 
 size of a COSS file?

I would use (4): decrease the cache_dir size on both disks and create a ~1 GB
COSS file on each disk.
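In squid.conf terms, Matus's option (4) might look like this sketch; the paths and sizes are illustrative, not taken from the thread:

```
# first disk: most of the space for AUFS, ~1 GB for a COSS file
cache_dir aufs /cache1 32000 64 256
cache_dir coss /cache1/coss 1000 max-size=131072
# second disk: same layout
cache_dir aufs /cache2 32000 64 256
cache_dir coss /cache2/coss 1000 max-size=131072
```

With max-size set on the COSS dirs, small hot objects land in COSS while larger objects go to the AUFS dirs.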

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Chernobyl was a Windows 95 beta test site.


Re: [squid-users] Squid 2.6 + COSS comparison

2006-09-19 Thread Matus UHLAR - fantomas
On 19.09.06 10:11, Adrian Chadd wrote:
 The COSS code in Squid-2.6 has come quite far from its original design by
 Eric Stern. Steven Wilton has put an enormous amount of effort into the
 COSS design to fix the remaining bugs and dramatically improve its
 performance.
 
 I've assembled a quick webpage showing the drop in CPU usage and the
 negligible effect on hit-rate. Steven Wilton provided the statistics
 from two Squid caches he administers.
 
 You can find it here - http://www.squid-cache.org/~adrian/coss/.

Great. Did you play with the max-size option to find out at which object size
the efficiency of aufs and COSS converges?

 Steven is running a recent snapshot of squid-2.6. The latest -STABLE
 release of Squid-2.6 doesn't incorporate all of the COSS bugfixes
 (and there's at least one really nasty bug!) so if you're interested
 in trying COSS out please grab the latest Squid-2.6 snapshot from
 the website.

are you speaking about current release (STABLE3) or about stable squid in
general?

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
LSD will make your ECS screen display 16.7 million colors


Re: [squid-users] Squid 2.6 + COSS comparison

2006-09-19 Thread Adrian Chadd
On Tue, Sep 19, 2006, Matus UHLAR - fantomas wrote:
 On 19.09.06 10:11, Adrian Chadd wrote:
  The COSS code in Squid-2.6 has come quite far from its original design by
  Eric Stern. Steven Wilton has put an enormous amount of effort into the
  COSS design to fix the remaining bugs and dramatically improve its
  performance.
  
  I've assembled a quick webpage showing the drop in CPU usage and the
  negligible effect on hit-rate. Steven Wilton provided the statistics
  from two Squid caches he administers.
  
  You can find it here - http://www.squid-cache.org/~adrian/coss/.
 
 Great. Did you play with the max-size option to find out at which object size
 the efficiency of aufs and COSS converges?

I haven't really administered large-scale production caches in a few
years, but there are plenty of academic papers circa 1997 showing a
clear win doing your own IO versus using a Unix FS for object sizes under
a couple hundred kilobytes. Modern Unix FSes haven't gotten (much) better
in that regard.

COSS is still pretty new and there's not a lot of documentation on how
to tune it. But even the defaults smoke a normal Unix filesystem.
Just start with max-size at, say, 64 KB or 128 KB and go from there.
Don't bother going above a few hundred kilobytes.
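Adrian's tuning advice corresponds to the max-size option on the COSS cache_dir line; a sketch with illustrative device names and values:

```
# keep objects up to 128 KB in COSS; anything larger goes to the AUFS dir
cache_dir coss /dev/sdd 34000 max-size=131072
cache_dir aufs /cache1 34000 64 256
```

Raising max-size beyond a few hundred kilobytes mostly pushes large, rarely-hit objects into the cyclic log, which is why he advises against it.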

  Steven is running a recent snapshot of squid-2.6. The latest -STABLE
  release of Squid-2.6 doesn't incorporate all of the COSS bugfixes
  (and there's at least one really nasty bug!) so if you're interested
  in trying COSS out please grab the latest Squid-2.6 snapshot from
  the website.
 
 are you speaking about current release (STABLE3) or about stable squid in
 general?

Steven committed a bunch of bugfixes to squid-2.6 after Squid-2.6-STABLE3
was released. So the current snapshots are fine, but STABLE3 isn't.



Adrian



[squid-users] parseHttpRequest: NF getsockopt(SO_ORIGINAL_DST) failed: (92) Protocol not available

2006-09-19 Thread Víctor J. Hernández Gómez
Hi,

we have a new transparent-proxy configuration built using the following
options:

./configure \
--prefix=/usr/local/squid-2.6.STABLE3 \
--disable-wccp \
--disable-wccp2 \
--enable-ssl \
--with-openssl=/usr/local/ssl \
--enable-default-err-language=Spanish \
--enable-err-languages="Spanish English" \
--disable-ident-lookups \
--enable-async-io \
--enable-linux-netfilter

on a Red Hat EL 4 update 2 kernel (netfilter is supported).

We are getting messages like this:

2006/09/19 11:44:30| parseHttpRequest: NF getsockopt(SO_ORIGINAL_DST)
failed: (92) Protocol not available

...in our cache.log.

Any idea on what is going on?

Thank you in advance for your help.
--
Víctor






Re: [squid-users] IOWAIT and second disk

2006-09-19 Thread Michał Margula

Matus UHLAR - fantomas wrote:

I would use the 4), decrease cache_disk for both disks and create ~1GB file
for COSS on both disks.



Can you explain why? I am not saying you're wrong, but I want to 
understand :)


--
Michał Margula, [EMAIL PROTECTED], http://alchemyx.uznam.net.pl/
W życiu piękne są tylko chwile [Ryszard Riedel]


[squid-users] Reverse proxy for multiple backend servers

2006-09-19 Thread Mohamed Navas V

Hi,

The reverse proxy redirecting to one backend web server is working fine
for me, but I am still confused about the settings for multiple backend
servers.

For a single backend server setup, http://proxy.my-domain.com is
redirecting to http://backendserver1.my-domain.com

For a multiple backend server setup, we have to do the following:

http://proxy.mydomain.com -> http://backendserver1.my-domain.com
http://proxy.mydomain.com/folder1 -> http://backendserver2.my-domain.com
http://proxy.mydomain.com/folder2 -> http://backendserver3.my-domain.com
etc ...

Here, the domain proxy.mydomain.com has a public IP.

Or please suggest any other suitable alternative, with config details.

thanks,

Br--
Sam


Re: [squid-users] Reverse proxy HTTPS port on 8443

2006-09-19 Thread fulan Peng

Here is a workable configuration file for 2.6 STABLE3. You can replace
those values with your own.

http_port 127.0.0.1:80  defaultsite=ddint.org
https_port 443 cert=c:\squid\etc\cert.pem key=c:\squid\etc\key.pem
defaultsite=zyzg.org.ru
https_port 9001 cert=c:\squid\etc\cert.pem key=c:\squid\etc\key.pem
defaultsite=192.168.0.1
https_port 9003 cert=c:\squid\etc\cert.pem key=c:\squid\etc\key.pem
defaultsite=www.peacehall.com
cache_peer www.peacehall.com parent 80  0 originserver name=peacehall

cache_peer 192.168.0.1 parent 5225  0 originserver name=futurechinaforum
cache_peer zyzg.org.ru parent 80  0 originserver name=zyzg
cache_peer ddint.org parent 80  0 originserver name=ddint
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
access_log c:/squid/var/logs/access.log squid
debug_options ALL,9
refresh_pattern ^ftp:		1440	20%	10080
refresh_pattern ^gopher:	1440	0%	1440
refresh_pattern .		0	20%	4320
acl www.peacehall.com dstdomain www.peacehall.com
acl 192.168.0.1 dstdomain 192.168.0.1
acl zyzg.org.ru dstdomain zyzg.org.ru
acl ddint.org dstdomain ddint.org
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80  
acl Safe_ports port 21  
acl Safe_ports port 443 563 
acl Safe_ports port 70  
acl Safe_ports port 210 
acl Safe_ports port 1025-65535  
acl Safe_ports port 280 
acl Safe_ports port 488 
acl Safe_ports port 591 
acl Safe_ports port 777 
acl CONNECT method CONNECT
http_access allow zyzg.org.ru
http_access allow www.peacehall.com
http_access allow ddint.org
#http_access allow www.dajiyuan.com
http_access allow 192.168.0.1
http_access allow localhost
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow all
http_reply_access allow all
icp_access allow all
cache_peer_access zyzg   allow zyzg.org.ru
cache_peer_access peacehall  allow www.peacehall.com
cache_peer_access futurechinaforum   allow 192.168.0.1
#cache_peer_access dajiyuan  allow www.dajiyuan.com
cache_peer_access ddint  allow ddint.org
visible_hostname ddint.org
coredump_dir c:/squid/var/cache


On 9/19/06, Mohamed Navas V [EMAIL PROTECTED] wrote:

hi,

We have a setup with one reverse proxy for multiple backend web
servers. All these servers carry HTTP traffic only, with accel port
80.

But one additional server is proposed within the existing setup, as follows:

  user --(request on port 8080)--> R.Proxy --(request on port 8080)--> Web Server
  user <--(reply on port 8443)---- R.Proxy <--(reply on port 8443)---- Web Server

i.e. the user will request http://example.com:8080/abc but should get an
HTTPS reply as https://example.com:8443/abc

We are using Squid 2.5; all the other servers are listening on ports 80
and 443 only.

What changes need to be made in the config file for this?

Br--
Navas



Re: [squid-users] caching geoserver

2006-09-19 Thread Peppo Herney
Hello,

thanks for your previous reply. Unfortunately I could not resolve the issue 
after following your hints. Here is log output from access.log:

1158668471.343    156 129.26.149.240 TCP_MISS/200 29792 GET 
http://129.26.151.234:8080/geoserver/wms?bbox=-130,24,-66,50&styles=population&Format=image/png&request=GetMap&layers=topp:states&width=550&height=250&srs=EPSG:4326
 - DIRECT/129.26.151.234 image/png [Host: 129.26.151.234:8080\r\nUser-Agent: 
Mozilla/5.0 (Windows; U; Windows NT 5.1; de-DE; rv:1.7.8) Gecko/20050511 
Firefox/1.0.4\r\nAccept: 
text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5\r\nAccept-Language: 
de-de,de;q=0.8,en-us;q=0.5,en;q=0.3\r\nAccept-Encoding: 
gzip,deflate\r\nAccept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nKeep-Alive: 
300\r\nProxy-Connection: keep-alive\r\nCookie: 
JSESSIONID=40q0nadg3udc7\r\nCache-Control: max-age=0\r\n] [HTTP/1.1 200 
OK\r\nContent-Type: image/png\r\nConnection: close\r\nServer: 
Jetty(6.0.x)\r\n\r]

and from store.log:

1158668471.343 RELEASE -1 B338CE0076836C85FD31C10090F9E091  200
-1 -1 -1 image/png -1/29584 GET 
http://129.26.151.234:8080/geoserver/wms?bbox=-130,24,-66,50&styles=population&Format=image/png&request=GetMap&layers=topp:states&width=550&height=250&srs=EPSG:4326

I also put it into a cacheability checker, and it said the object would be 
stale; Cache-Control: max-age=0 doesn't look too promising.
Is there an option to force caching on it?

kind regards

Peppo

old message:
On Thu 2006-07-27 18:14 +0200, Peppo Herney wrote:
 Hello,

 I hope someone can help me out on this:
 I have set up a geoserver http://docs.codehaus.org/display/GEOS/Home
 and I do not want it to generate the same map over and over again, because it 
 consumes a lot of processor time. Therefore I installed squid and everything 
 seems to work fine, except it does not cache anything. It is used in the 
 httpd accelerator mode.
 Here is some output:
 access.log:
 1154016642.734   1328 129.26.149.240 TCP_MISS/200 199354 GET 
 http://129.26.151.234:8080/geoserver/wms? - DIRECT/129.26.151.234 image/png

1. Enable log_query_terms to have the full URL logged and make sure it's
identical.

2. Make sure you have not blocked caching of query URLs. The suggested
default config does; see the cache directive.

3. If still no luck, check with the cacheability check engine to make
sure the response isn't blocked from caching by the application.

4. If still in doubt, enable log_mime_hdrs and post a couple of entries
(two, max three) and we will try to help you find the correct knobs
to make Squid cache the object.
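Henrik's steps above correspond to squid.conf changes along these lines; a sketch against the suggested default config (the strip_query_terms line is my assumption about how full query URLs get logged, not something stated in this thread):

```
# steps 1-2: log full query URLs and allow caching of query responses
# (comment out or remove the default pair that blocks them)
#acl QUERY urlpath_regex cgi-bin \?
#cache deny QUERY
strip_query_terms off

# step 4: include request/reply headers in access.log entries
log_mime_hdrs on
```

With the QUERY deny removed, cacheability is then decided by the response headers alone, which is exactly where a Cache-Control: max-age=0 problem would show up.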

Regards
Henrik
-- 
this is a manually generated mail; it
contains typos and is valid even
without capital letters.


[squid-users] squid limits and known constraints

2006-09-19 Thread Benner, Uwe


Dear all,

We want to set up a Squid proxy environment for the headquarters and
subsidiaries.
Before I start the project I need to know the hardware requirements
(sizing) and whether any limits or constraints regarding Squid are known
(the OS will be Linux).

Where can I find this information? The FAQs and wiki don't contain
helpful information.

Thanks for your support



Re: [squid-users] squid limits and known constraints

2006-09-19 Thread Adrian Chadd
On Tue, Sep 19, 2006, Benner, Uwe wrote:
 
 
 Dear all,
 
 We want to set up a Squid proxy environment for the headquarters and
 subsidiaries.
 Before I start the project I need to know the hardware requirements
 (sizing) and whether any limits or constraints regarding Squid are known
 (the OS will be Linux)

I'm going through the process of doing this for a few clients.
It really depends wildly on how many clients there are per site,
how you're thinking of linking them up, and how much bandwidth is available.



adrian



[squid-users] Squid ACL (Is this Possible)

2006-09-19 Thread Mehmet, Levent \(Accenture\)
 All

I currently have a setup which sends different domains to different
cache_peers. This has been working fine with the config below:

cache_peer 1.1.1.1 parent 80 80 no-query
cache_peer 2.2.2.2 parent 80 80 no-query
cache_peer 3.3.3.3 parent 3128 3130 no-query

cache_peer_domain 3.3.3.3 parent  nww. .nhs.uk
cache_peer_domain 1.1.1.1 parent .gsi.gov.uk
cache_peer_domain 2.2.2.2 parent .gsi.gov.uk

acl NHS dstdomain  nww. .nhs.uk
acl GSI dstdomain .gsi.gov.uk

cache_peer_access 3.3.3.3 allow NHS
cache_peer_access 1.1.1.1 allow GSI

never_direct allow NHS
never_direct allow GSI


When trying to access http://nww.nhs.uk, this goes via the correct path
of 3.3.3.3. But our clients now wish to access websites that cause a
conflict, e.g. http://nww.nhsmessaging.co.uk/: because of the .co.uk the
request tries to go direct, while the nww tries to go via 3.3.3.3.
Likewise, with http://www.pasa.nhs.uk/cat_default.asp the www goes direct
while the nhs.uk tries to go via 3.3.3.3. This is a major showstopper for
the company. Is there a way around this, as we need to send all nww
traffic down 3.3.3.3?

Thanks
 

Levent Mehmet 
Network Analyst 
Server and Network Team 
[EMAIL PROTECTED] Operate Unit 
Market Towers, 20th Floor 
1 Nine Elms Lane 
London 
SW8 5NQ 

E-mail: [EMAIL PROTECTED] 
Phone: +44 20 7084 3517 
Fax:   +44 20 7084 2536 





AW: [squid-users] squid limits and known constraints

2006-09-19 Thread Benner, Uwe
Thanks for your fast response.

I am thinking about a parent/sibling setup.
The range of users will be (located in different subsidiaries):
50
150 - 200
~900
~1500

Bandwidth of the internet link and the links between subsidiaries is still unknown.

Uwe

 Dear all,
 
 We want to set up a Squid proxy environment for the headquarters and
 subsidiaries.
 Before I start the project I need to know the hardware requirements
 (sizing) and whether any limits or constraints regarding Squid are known
 (the OS will be Linux)

I'm going through the process of doing this for a few clients.
It really depends wildly on how many clients there are per site,
how you're thinking of linking them up, and how much bandwidth is
available.



adrian





Re: [squid-users] squid limits and known constraints

2006-09-19 Thread Adrian Chadd
On Tue, Sep 19, 2006, Benner, Uwe wrote:
 Thanks for your fast response.
 
 I am thinking about a parent/sibling setup.
 The range of users will be (located in different subsidiaries):
 50
 150 - 200
 ~900
 ~1500
 
 Bandwidth of the internet link and the links between subsidiaries is still unknown

Anything reasonably recent running Squid 2.6 should handle at least 500
clients, maybe more, perhaps a thousand. The cyclic filesystem support
in squid-2.6 dramatically increases the number of clients Squid can
handle.

Squid-2.6 should be able to cache a few multiples of 10 megabits these
days, especially if you're using COSS.

But as I've said in a few other posts, I've been out of the loop running
proxy caches and am still getting back into said loop. Hopefully others
can contribute more accurate and up-to-date figures.




Adrian



[squid-users] Force squid to timeout and internet explorer hanging

2006-09-19 Thread Irwan Hadi

We have a problem right now with our Squid 2.5 STABLE10: if IE
tries to open websites with many advertisements, e.g. msnbc.com or
cnn.com, IE will hang.

The main problem is that we just implemented an intrusion-prevention
firewall at our border router that blocks advertisement websites, so
when Squid can't open an advertisement link, the client (IE) hangs.

Is it possible to set some kind of timeout, so that a URL that fails
to open is returned as failed by Squid within a couple of seconds?

I've been testing with:
persistent_request_timeout
request_timeout
forward_timeout
connect_timeout

and changed them to 20 seconds, but the problem still persists.
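For reference, the directives the poster lists are set like this in squid.conf (the 20-second values come from the message; whether they cover the hang depends on where the connection stalls):

```
# fail connection attempts to unreachable (blocked) servers quickly
connect_timeout 20 seconds
# give up on the whole forwarding attempt after 20 seconds
forward_timeout 20 seconds
# limits on receiving the client's request itself
request_timeout 20 seconds
persistent_request_timeout 20 seconds
```

If the firewall silently drops packets rather than rejecting them, connect_timeout is the one that matters most, since the TCP handshake itself is what hangs.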

Thanks


[squid-users] Squid Multiwan

2006-09-19 Thread sOngUs

Hello guys! I have a Linux server with two WAN interfaces to two
different ISPs, and I'm using Squid to cache and filter some things for
the users. Is there a way I can select which route Squid takes to
the internet? I want some hosts to go to the internet using a
different route than the default route (I'm using policy routing to do
this). It works so far if I only SNAT them, but if I redirect their
port-80 traffic so they use Squid, Squid goes through the default
route. Is there a way I can accomplish this, having hosts go
through different connections while still being cached by Squid?

I hope I've made myself clear; any help would be very much appreciated.


[squid-users] changing cache_dir size

2006-09-19 Thread Lawrence Wang

hi, if I change the size of a cache_dir in squid.conf, do I have to
re-initialize the dir with squid -z?


[squid-users] Re: Squid 2.6 + COSS comparison

2006-09-19 Thread Joost de Heer
Adrian Chadd wrote:
 Hi everyone,

 The COSS code in Squid-2.6 has come quite far from its original design by
 Eric Stern. Steven Wilton has put an enormous amount of effort into the
 COSS design to fix the remaining bugs and dramatically improve its
 performance.

 I've assembled a quick webpage showing the drop in CPU usage and the
 negligible effect on hit-rate. Steven Wilton provided the statistics
 from two Squid caches he administers.

 You can find it here - http://www.squid-cache.org/~adrian/coss/.
 Steven is running a recent snapshot of squid-2.6. The latest -STABLE
 release of Squid-2.6 doesn't incorporate all of the COSS bugfixes
 (and there's at least one really nasty bug!) so if you're interested
 in trying COSS out please grab the latest Squid-2.6 snapshot from
 the website.

The example proxy given has a request rate of about 100 req/s max, if I
understand the graphs correctly. How does COSS hold up when the request rate
is significantly higher? I run a proxy that currently seems to peak around
420 req/s (with an average rate of about 300 req/s during office
hours), and am currently using aufs. The peak rate is about 25-30 Mbps.
Anything that can improve the proxy's performance further is welcome, since
I have the feeling the proxy is currently hitting its upper limits.

Joost



Re: [squid-users] Squid ACL (Is this Possible)

2006-09-19 Thread Chris Robertson

Mehmet, Levent (Accenture) wrote:

 All

I currently have a setup which sends different domains to different
Cache_peers. This has been working fine with the below config.:

cache_peer 1.1.1.1 parent 80 80 no-query
cache_peer 2.2.2.2 parent 80 80 no-query
cache_peer 3.3.3.3 parent 3128 3130 no-query

cache_peer_domain 3.3.3.3 parent  nww. .nhs.uk
  
Hmmm... I don't think that text followed by a dot is valid syntax for 
cache_peer_domain or dstdomain. I'd advise making a dstdom_regex acl 
and using cache_peer_access for this peer. Something like...


acl NWW dstdom_regex \.?nww\.
acl NHS dstdomain .nhs.uk
cache_peer_access 3.3.3.3 allow NHS
cache_peer_access 3.3.3.3 allow NWW
never_direct allow NWW

...in addition to the other rules you have listed.

cache_peer_domain 1.1.1.1 parent .gsi.gov.uk
cache_peer_domain 2.2.2.2 parent .gsi.gov.uk

acl NHS dstdomain  nww. .nhs.uk
  

Obviously, this ACL should be adjusted as shown above.

acl GSI dstdomain .gsi.gov.uk

cache_peer_access 3.3.3.3 allow NHS
cache_peer_access 1.1.1.1 allow GSI

never_direct allow NHS
never_direct allow GSI


When trying to access http://nww.nhs.uk, this goes via the correct path
of 3.3.3.3. But our clients now wish to access websites that cause a
conflict, e.g. http://nww.nhsmessaging.co.uk/: because of the .co.uk the
request tries to go direct, while the nww tries to go via 3.3.3.3.
Likewise, with http://www.pasa.nhs.uk/cat_default.asp the www goes direct
while the nhs.uk tries to go via 3.3.3.3. This is a major showstopper for
the company. Is there a way around this, as we need to send all nww
traffic down 3.3.3.3?

Thanks
 

Levent Mehmet 
Network Analyst 
Server and Network Team 
[EMAIL PROTECTED] Operate Unit 
Market Towers, 20th Floor 
1 Nine Elms Lane 
London 
SW8 5NQ 

E-mail: [EMAIL PROTECTED] 
Phone: +44 20 7084 3517 
Fax:   +44 20 7084 2536 
  

Chris


Re: [squid-users] Persistent Connections

2006-09-19 Thread Henrik Nordstrom
On Tue 2006-09-19 10:02 -0700, Mark Nottingham wrote:

 1) Squid supports HTTP/1.0-style persistent connections; i.e., if it  
 gets a request with a Connection: keep-alive header in it, it will  
 reuse the connection.

Yes.

 However, if it receives an HTTP/1.1 request, it will fall back to
 one-request-per-connection. Since pconns are the default in HTTP/1.1, why
 not use them?

Because we don't know that the HTTP/1.1 client understands
HTTP/1.0-style persistent connections.

 2) Squid still sends Proxy-Connection headers. As far as I can see,  
 they're not required by any modern implementations; everybody  
 understands Connection. Maybe it's time to stop generating them?

You mean sending Connection: keep-alive instead of Proxy-Connection:
keep-alive? It's a bit of a grey zone, as neither is defined in any
standard.

 3) Squid can't persist client-side connections if it doesn't have a  
 Content-Length handy, so if the origin server doesn't provide one,  
 it'll close. However, responses cached by Squid -- by their very  
 nature -- have a C-L available to Squid, even if the origin server  
 doesn't send one.  Since content generated by scripts often doesn't  
 have C-L set, but can sometimes be cacheable, it would be a nice  
 optimisation to synthesise the response body length if you don't have  
 a C-L on a cached response. Has anyone attempted this?

It's possible (and not even difficult). I think we even did so at some
point in time. The main reason it isn't done is that with HTTP/1.0
signaling, which uses connection close to mark the end of the object, we
are not really sure the object isn't truncated, since most software
errors and several network errors are signaled in the same manner.

Regards
Henrik




Re: [squid-users] [squid-users]: WARNING: Disk space over limit: 31457564 KB > 31457280 KB. How to avoid it?

2006-09-19 Thread Henrik Nordstrom
Mon 2006-09-18 at 16:40 -0700, Pranav Desai wrote:

 Could any of the above have any impact on the performance.

Well.. it's a sign that your cache_dir is probably overloaded, or that
you have set cache_swap_high too high making it overly hard for the LRU
garbage collection to keep up.

Regards
Henrik




Re: [squid-users] parseHttpRequest: NF getsockopt(SO_ORIGINAL_DST) failed: (92) Protocol not available

2006-09-19 Thread Henrik Nordstrom
Tue 2006-09-19 at 11:53 +0200, Víctor J. Hernández Gómez wrote:

 2006/09/19 11:44:30| parseHttpRequest: NF getsockopt(SO_ORIGINAL_DST)
 failed: (92) Protocol not available
 
 ...in our cache.log.
 
 Any idea on what is going on?

Do you have the iptables NAT table loaded?

Regards
Henrik




Re: [squid-users] Reverse proxy for multiple backend servers

2006-09-19 Thread Henrik Nordstrom
Tue 2006-09-19 at 15:18 +0400, Mohamed Navas V wrote:
 Hi,
 
 The reverse proxy to redirect for 1 web backend server is working fine
 for me ...
 Still I am confused with some setting for multiple backend servers.
 
 For single backend server setup http://proxy.my-domain.com is
 redirecting to http://backendserver1.my-domain.com
 
 For multiple backend server setup, we have to do as follows:-
 
 http://proxy.mydomain.com -- http://backendserver1.my-domain.com

ok, just cache_peer + cache_peer_domain/access..

 http://proxy.mydomain.com/folder1 -- http://backendserver2.my-domain.com
 http://proxy.mydomain.com/folder2 --
 http://backendserver3.my-domain.com etc ...

This is a little trickier if you really want to rewrite the url-path
after the host. If you can rearrange the backend servers to each have
their content in unique directories then things get a whole lot simpler
and much less error prone. In that case it's just a matter of cache_peer
+ cache_peer_access to select which backend to use.

If you really need to rewrite the url-path then use a redirector to
rewrite the requested URL.
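As a rough sketch of the simpler arrangement (all hostnames and folder names below are placeholders, not from the original setup), the cache_peer + cache_peer_access selection could look like:

```
# Each backend declared as an origin-server peer (hostnames hypothetical)
cache_peer backendserver1.my-domain.com parent 80 0 originserver name=backend1
cache_peer backendserver2.my-domain.com parent 80 0 originserver name=backend2
cache_peer backendserver3.my-domain.com parent 80 0 originserver name=backend3

# Route by url-path; assumes each backend serves its own unique directory
acl folder1 urlpath_regex ^/folder1/
acl folder2 urlpath_regex ^/folder2/
cache_peer_access backend2 allow folder1
cache_peer_access backend3 allow folder2
cache_peer_access backend2 deny all
cache_peer_access backend3 deny all
cache_peer_access backend1 allow all
```

Note this only selects a backend; if the url-path must actually be rewritten (e.g. stripping /folder1 before the request reaches the backend), a redirector is still needed.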

Regards
Henrik




Re: [squid-users] caching geoserver

2006-09-19 Thread Henrik Nordstrom
Tue 2006-09-19 at 15:28 +0200, Peppo Herney wrote:

 1158668471.343    156 129.26.149.240 TCP_MISS/200 29792 GET 
 http://129.26.151.234:8080/geoserver/wms?bbox=-130,24,-66,50&styles=population&Format=image/png&request=GetMap&layers=topp:states&width=550&height=250&srs=EPSG:4326
  - DIRECT/129.26.151.234 image/png

Request:

 [Host: 129.26.151.234:8080\r\n
 User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; de-DE; rv:1.7.8) 
 Gecko/20050511 Firefox/1.0.4\r\n
 Accept: 
 text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5\r\n
 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3\r\n
 Accept-Encoding: gzip,deflate\r\n
 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\n
 Keep-Alive: 300\r\n
 Proxy-Connection: keep-alive\r\n
 Cookie: JSESSIONID=40q0nadg3udc7\r\n
 Cache-Control: max-age=0\r\n]

Response:

 [HTTP/1.1 200 OK\r\n
 Content-Type: image/png\r\n
 Connection: close\r\n
 Server: Jetty(6.0.x)\r\n\r]

 I also put it into a cacheability checker and it said the object would be 
 stale. Cache-Control: max-age=0 doesn't look too promising.
 Is there an option to force caching on it?


There is nothing in the response which prevents it from being cached,
but nothing which makes it cached either. Typical dynamic response with
no regard to caching. You can add expiry information to this response
with the min parameter in refresh_pattern if needed.


However, the request explicitly says the response must be fresh (or at
least verified fresh). You can override this with ignore-reload.
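Combining both suggestions, a hedged squid.conf sketch (the URL pattern and the times are illustrative placeholders; ignore-reload is a refresh_pattern option in squid 2.6):

```
# Treat WMS map images as fresh for at least 60 minutes, and ignore
# the client's forced reloads (Cache-Control: max-age=0).
# Place this before the catch-all "refresh_pattern ." line.
refresh_pattern -i /geoserver/wms 60 20% 1440 ignore-reload
```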

Regards
Henrik




Re: [squid-users] Force squid to timeout and internet explorer hanging

2006-09-19 Thread Henrik Nordstrom
Tue 2006-09-19 at 10:20 -0600, Irwan Hadi wrote:

 The main problem is we just implemented an intrusion firewall at our
 border router that blocks advertisement websites. So when Squid can't
 open these advertisement links, it causes the client (IE) to hang.

Hmm... you should reconfigure your firewall to shut down such
offending connections with TCP RST, not silently drop them.
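With Linux netfilter, for example, that usually means switching a DROP rule to REJECT with a TCP reset (the destination address here is a placeholder for the blocked ad-server addresses):

```
# Send a TCP RST instead of silently discarding the packets, so
# Squid's connect() fails immediately rather than timing out.
iptables -A FORWARD -d 192.0.2.10 -p tcp -j REJECT --reject-with tcp-reset
```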

 Is it possible to set some kind of timeout, so that a URL that fails
 to open is returned as failed by Squid within a couple of seconds?

It's automatic provided your firewall reacts properly.

Detecting blackholed connections is harder, as it looks exactly the same
as a server taking a long time to respond. You should not need to do
this, but if you absolutely want to, look into read_timeout.

Depending on how these sites are blocked you may also have success
looking into connect_timeout.
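If the firewall cannot be changed, a squid.conf sketch of those two knobs (the values are illustrative, not recommendations):

```
# Give up quickly on TCP connects that get no answer at all
connect_timeout 10 seconds
# Abort server reads that stall too long mid-transfer
read_timeout 5 minutes
```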

Regards
Henrik




Re: [squid-users] Squid Multiwan

2006-09-19 Thread Henrik Nordstrom
Tue 2006-09-19 at 10:52 -0600, sOngUs wrote:
 Hello guys!!   I got a linux server with two WAN interfaces to two
 different ISPs, and I'm using Squid to cache and filter some things for
 the users... is there a way I can select which route Squid takes to
 go to the internet?

Squid doesn't do routing (that's a property of your TCP/IP kernel in the
OS). But it can select different source addresses depending on the
request which fits nicely in policy routing. See tcp_outgoing_address.
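A minimal sketch (all addresses and the subnet are hypothetical): pick the outgoing source address per client group in squid.conf, then let the OS policy-route on source address:

```
# Clients from this subnet leave via ISP A's address,
# everyone else via ISP B's address.
acl ispa_users src 10.1.0.0/255.255.0.0
tcp_outgoing_address 192.0.2.1 ispa_users
tcp_outgoing_address 198.51.100.1
```

The kernel side then needs matching policy routes, e.g. with iproute2: `ip rule add from 192.0.2.1 table ispa`, plus a default route via ISP A in that table.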

Regards
Henrik




Re: [squid-users] Squid Multiwan

2006-09-19 Thread sOngUs

Ok thanks!

I'm doing some testing using this:
http://devel.squid-cache.org/tosaddracl/example.html
But it doesn't work, it gives me timeouts!
I have many questions... like...
tcp_outgoing_address 192.168.0.1 someacl (src acl)
What does this rule do? I'm guessing:

it opens the connection from hosts listed in someacl to any website as if it was
coming from 192.168.0.1?
Well, I added that IP address to one of the interfaces and to http_port too.

What am I doing wrong?



On 9/19/06, Henrik Nordstrom [EMAIL PROTECTED] wrote:

Tue 2006-09-19 at 10:52 -0600, sOngUs wrote:
 Hello guys!!   I got a linux server with two WAN interfaces to two
 different ISPs, and I'm using Squid to cache and filter some things for
 the users... is there a way I can select which route Squid takes to
 go to the internet?

Squid doesn't do routing (that's a property of your TCP/IP kernel in the
OS). But it can select different source addresses depending on the
request which fits nicely in policy routing. See tcp_outgoing_address.

Regards
Henrik





[squid-users] Re: redirect access to wpad.dat

2006-09-19 Thread dny

On 9/19/06, dny [EMAIL PROTECTED] wrote:

how can I redirect access to wpad.dat to my own version of this file
using Squid?

tia
dny



Can I put this in squid.conf?

acl wpad url_regex -i \wpad.dat$
http_access deny wpad
deny_info myownwpad.dat

any suggestion?
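Roughly, yes, but the regex needs an escaped dot and deny_info takes the ACL name as its last argument. A hedged sketch (the replacement URL is hypothetical; the file has to be hosted somewhere clients can reach):

```
acl wpad url_regex -i /wpad\.dat$
http_access deny wpad
# Serve a redirect to your own copy instead of an error page
deny_info http://proxy.internal.example/myownwpad.dat wpad
```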

tia
dny



--- http://bloglines.com/public/bacaan --- berita terkini -
bencana/gempa/tsunami/flue/etc

... they look but do not see and hear but do not listen or understand. Mat 13:13
... but that which cometh out of the mouth, this defileth a man.   Mat 15:11



[squid-users] squid error running out of filedescriptors and others

2006-09-19 Thread dny

hi,

just checked my cache log and found these errors.

What are these? And how do I fix them?

tnxrgds,
dny


2006/09/19 13:37:43| storeAufsOpenDone: (2) No such file or directory
2006/09/19 13:37:43|    /var/log/cache/02/13/000213E4
2006/09/19 13:41:10| storeAufsOpenDone: (2) No such file or directory
2006/09/19 13:41:10|    /var/log/cache/00/23/238D
2006/09/19 14:36:08| WARNING! Your cache is running out of filedescriptors
2006/09/19 14:36:24| WARNING! Your cache is running out of filedescriptors
2006/09/19 14:36:40| WARNING! Your cache is running out of filedescriptors
2006/09/19 14:36:56| WARNING! Your cache is running out of filedescriptors
2006/09/19 14:37:12| WARNING! Your cache is running out of filedescriptors
2006/09/19 14:37:28| WARNING! Your cache is running out of filedescriptors
2006/09/19 14:37:43| parseHttpRequest: NF getsockopt(SO_ORIGINAL_DST)
failed: (2) No such file or directory


--



Re: [squid-users] Reverse proxy HTTPS port on 8443

2006-09-19 Thread Mohamed Navas
OK, here the problem is that the SSL certificate is kept on the
destination backend server, and it is the backend that initiates the
SSL transaction. The proxy server itself has SSL certs/keys for other
servers, for HTTPS requests from the clients ...


I had tried redirecting from http://example.com:8080/abc to
https://example.com:8443/abc in Apache just for testing, but I only get
some dots in the browser ..!!


thanks,

Br-
Navas

 At 04:41 PM 9/19/2006, fulan Peng wrote:

Here is a workable configuration file for 2.6 S3. You can replace
those names with your own.

http_port 127.0.0.1:80  defaultsite=ddint.org
https_port 443 cert=c:\squid\etc\cert.pem key=c:\squid\etc\key.pem
defaultsite=zyzg.org.ru
https_port 9001 cert=c:\squid\etc\cert.pem key=c:\squid\etc\key.pem
defaultsite=192.168.0.1
https_port 9003 cert=c:\squid\etc\cert.pem key=c:\squid\etc\key.pem
defaultsite=www.peacehall.com
cache_peer www.peacehall.com parent 80  0 originserver name=peacehall

cache_peer 192.168.0.1 parent 5225  0 originserver name=futurechinaforum
cache_peer zyzg.org.ru parent 80  0 originserver name=zyzg
cache_peer ddint.org parent 80  0 originserver name=ddint
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
access_log c:/squid/var/logs/access.log squid
debug_options ALL,9
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
acl www.peacehall.com dstdomain www.peacehall.com
acl 192.168.0.1 dstdomain 192.168.0.1
acl zyzg.org.ru dstdomain zyzg.org.ru
acl ddint.org dstdomain ddint.org
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80
acl Safe_ports port 21
acl Safe_ports port 443 563
acl Safe_ports port 70
acl Safe_ports port 210
acl Safe_ports port 1025-65535
acl Safe_ports port 280
acl Safe_ports port 488
acl Safe_ports port 591
acl Safe_ports port 777
acl CONNECT method CONNECT
http_access allow zyzg.org.ru
http_access allow www.peacehall.com
http_access allow ddint.org
#http_access allow www.dajiyuan.com
http_access allow 192.168.0.1
http_access allow localhost
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow all
http_reply_access allow all
icp_access allow all
cache_peer_access zyzg  allow zyzg.org.ru
cache_peer_access peacehall allow www.peacehall.com
cache_peer_access futurechinaforum  allow 192.168.0.1
#cache_peer_access dajiyuan allow www.dajiyuan.com
cache_peer_access ddint allow ddint.org
visible_hostname ddint.org
coredump_dir c:/squid/var/cache


On 9/19/06, Mohamed Navas V [EMAIL PROTECTED] wrote:

hi,

We have one setup with a reverse proxy for multiple backend
servers. All these servers are for HTTP traffic only, with accel port
80.

But one additional server is proposed alongside the existing setup, as follows:-


user ---request on port 8080---> R.Proxy ---request on port 8080---> Web Server

user <----reply on port 8443---- R.Proxy <----reply on port 8443---- Web Server

ie the user will request http://example.com:8080/abc but wants to get an
HTTPS reply as https://example.com:8443/abc ...

We are using squid 2.5; all the other servers except this one are
listening on ports 80 and 443 only.

What changes need to be made in the config file for this?

Br--
Navas