[squid-users] large chunked object problem

2009-09-07 Thread Itzcak Pechtalt
Hi,

I have some problem with Squid 2.7.
A very large object is saved to disk even though it exceeds
maximum_object_size, which is 4MB.

It seems that chunked objects are saved to disk without any limit.
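
For context: a chunked reply carries no Content-Length, so its total size
isn't known up front. Such a response starts with headers along these lines
(the values are made up for illustration):

HTTP/1.1 200 OK
Date: Mon, 07 Sep 2009 10:00:00 GMT
Transfer-Encoding: chunked
Content-Type: video/x-flv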

Is this right? And is there a way to resolve it?

Thanks

Itzcak


Re: [squid-users] COSS max-stripe-waste parameter not implemented

2009-03-30 Thread Itzcak Pechtalt
I'm talking about Squid 2.7

Itzcak

On Mon, Mar 30, 2009 at 5:04 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 Hi,

 It seems that the max-stripe-waste parameter of the COSS file system isn't
 implemented, and in effect it is equal to max-stripe-waste=1M.
 cs->sizerange_min isn't used, and the storeCossAllocate function doesn't
 enforce a limit.

 In practice this wastes COSS disk space: when an object larger
 than the remaining free space arrives, the
 current stripe is written out as-is, even if a large part of it is unused.

 Is there any reason why this isn't implemented, or should it be
 considered a bug?

 Thanks

 Itzcak


 What version of Squid are you talking about?

 Amos




[squid-users] COSS max-stripe-waste parameter not implemented

2009-03-29 Thread Itzcak Pechtalt
Hi,

It seems that the max-stripe-waste parameter of the COSS file system isn't implemented,
and in effect it is equal to max-stripe-waste=1M.
cs->sizerange_min isn't used, and the storeCossAllocate function doesn't
enforce a limit.

In practice this wastes COSS disk space: when an object larger
than the remaining free space arrives, the
current stripe is written out as-is, even if a large part of it is unused.

Is there any reason why this isn't implemented, or should it be
considered a bug?

Thanks

Itzcak


Re: [squid-users] dynamic content caching

2009-03-29 Thread Itzcak Pechtalt
Do you have a Date header? If yes, it should be cached.
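
For illustration, a reply of roughly this shape (a Date header plus
Cache-Control: max-age, no Last-Modified; the values are made up) should be
cacheable:

HTTP/1.1 200 OK
Date: Sun, 29 Mar 2009 15:10:00 GMT
Cache-Control: max-age=300
Content-Type: text/html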

On Sun, Mar 29, 2009 at 5:11 PM, maoz mao...@gmail.com wrote:
 Hi,

 I use squid as a reverse proxy and I'd like to cache dynamic pages.
 The origin web server (apache) provides a Cache-Control: max-age header
 without a Last-Modified header (which makes sense, since dynamic pages are
 generated when requested).

 The only way I managed to cache the page was by adding a
 Last-Modified header.

 Is it possible to cache a dynamic page without a Last-Modified header?



[squid-users] Squid store open errors

2009-01-06 Thread Itzcak Pechtalt
Hi,

I got several errors in the cache log like the following:
storeAufsOpenDone: (1) Operation not permitted /var/spool/squid/12/28/0012287A
Sometimes I get the same but with a "File not found" error.

I suppose it's related to an unclean system restart that dropped some
objects from swap.state and from Squid.

Is there any way to fix these problems after an unclean restart?

Thanks

Itzcak


Re: [squid-users] outside squid link overload.

2008-12-02 Thread Itzcak Pechtalt
Squid doesn't have such a feature; data from the web is read without
regard to the client's bandwidth status. It's an interesting feature, but it
doesn't exist.
delay_pools only apply toward clients and will not help with your problem.
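
For reference, the client-side shaping that delay_pools do provide looks
roughly like this (a minimal sketch; the numbers are only an example):

delay_pools 1
delay_class 1 1
delay_access 1 allow all
delay_parameters 1 65536/65536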

Itzcak

On Fri, Nov 28, 2008 at 12:52 PM, Alexey Kovrizhnykh
[EMAIL PROTECTED] wrote:
 Hello Squid users.
  A long time has been spent looking for a solution. Still I have
 nothing.
  We are a mobile operator. I have many GPRS subscribers who are NATed
 by a Cisco to one public IP address before going to squid-2.5.STABLE6-3.4E.5.
 The problem is that GPRS users start 2-3 downloads and Squid loads them very
 fast and puts them in the cache, but the user can load the objects from Squid only at
 30-180kbit/s. So when I divert all GPRS users through Squid, my internet link
 becomes stuck, because Squid loads many files at all available bandwidth and
 then gives them to customers at GPRS speed.
  Restriction by delay pool doesn't fit - users have different speeds (up to
 230kbit/s), so if I restrict each to 230kbit/s it won't help, because a 40kbit
 user will still cause Squid to download at 230kbit/s. If I restrict to 40kbit/s,
 then the 230kbit user will complain that the speed is very low.

  Question: How can I make Squid not download a new part of a file unless the
 old part has already been loaded by the user?
  I tried a Cisco WAE612 as a Content Engine - the same issue.

 Here is a picture of the Squid interface:


 Sometimes traffic is equal. But at some moments Squid loads from the Internet (blue)
 more than it gives to the network (green). So the network cannot consume the
 loaded traffic in time.
 Please help.
 ---
 Alexey Kovrizhnykh





[squid-users] TCP connections keep alive problem after 302 HTTP response from web

2008-11-30 Thread Itzcak Pechtalt
Hi

I found some inefficiency in Squid's TCP connection handling toward servers.
In some cases Squid closes the TCP connection to a server immediately after
a 304 Not Modified response and doesn't save it for reuse.
There is no visible reason why Squid closes the connection. Squid
sends Connection: Keep-Alive in the HTTP request and the web server
returns Connection: Keep-Alive in the response; also, pconn_timeout
is configured to 1 minute.

After digging into the problem, I found that it occurs
only when the object type is PRIVATE. It seems that when the
client_side code handles a 304 Not Modified reply it calls
store_unregister, which closes the store entry and, in turn, the TCP connection.

To reproduce it, do the following:
1) Browse www.cnn.com
2) Delete the browser cache.
3) Browse again. The problem will occur here.

Does anyone know about this?

Itzcak

Below is a short Wireshark capture of one sample; 10.50.0.100 is the Squid
IP. Note the FIN packet from Squid.

0.00  10.50.0.100 -> 205.128.90.126 TCP 4006 > http [SYN] Seq=0
Len=0 MSS=1460 WS=2
0.085216 205.128.90.126 -> 10.50.0.100  TCP http > 4006 [SYN, ACK]
Seq=0 Ack=1 Win=5840 Len=0 MSS=1460 WS=7
0.085226  10.50.0.100 -> 205.128.90.126 TCP 4006 > http [ACK] Seq=1
Ack=1 Win=5840 Len=0
0.085230  10.50.0.100 -> 205.128.90.126 HTTP GET
/cnn/.element/css/2.0/common.css HTTP/1.0
GET /cnn/.element/css/2.0/common.css HTTP/1.0
If-Modified-Since: Tue, 16 Sep 2008 14:48:32 GMT
Accept: */*
Referer: http://www.cnn.com/
Accept-Language: en-us
UA-CPU: x86
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET
CLR 2.0.50727; .NET CLR 3.0.04506.30)
Host: i.cdn.turner.com
Cache-Control: max-age=259200
Connection: keep-alive

0.172250 205.128.90.126 -> 10.50.0.100  TCP http > 4006 [ACK] Seq=1
Ack=366 Win=6912 Len=0
0.172934 205.128.90.126 -> 10.50.0.100  HTTP HTTP/1.1 304 Not Modified
HTTP/1.1 304 Not Modified
Date: Wed, 26 Nov 2008 12:33:33 GMT
Expires: Wed, 26 Nov 2008 13:03:51 GMT
Last-Modified: Tue, 16 Sep 2008 14:48:32 GMT
Cache-Control: max-age=3600
Connection: keep-alive

0.173145  10.50.0.100 -> 205.128.90.126 TCP 4006 > http [ACK] Seq=366
Ack=206 Win=6912 Len=0
0.173238  10.50.0.100 -> 205.128.90.126 TCP 4006 > http [FIN, ACK]
Seq=366 Ack=206 Win=6912 Len=0
0.259520 205.128.90.126 -> 10.50.0.100  TCP http > 4006 [FIN, ACK]
Seq=206 Ack=367 Win=6912 Len=0
0.259906  10.50.0.100 -> 205.128.90.126 TCP 4006 > http [ACK] Seq=367
Ack=207 Win=6912 Len=0
0.565702 205.128.90.126 -> 10.50.0.100  TCP http > 4006 [FIN, ACK]
Seq=206 Ack=367 Win=6912 Len=0
0.565842  10.50.0.100 -> 205.128.90.126 TCP [TCP Dup ACK 10#1] 4006 >
http [ACK] Seq=367 Ack=207 Win=6912 Len=0


Re: [squid-users] Strange RST packet

2008-11-13 Thread Itzcak Pechtalt
What is the situation?

Do other clients work OK?

Did you check whether a Squid crash occurred (check cache.log)?

Is there a specific scenario leading to the RESET?

Squid doesn't send a RESET without reason; check the capture for the cause.

Itzcak

On Tue, Nov 11, 2008 at 8:50 PM, Luis Daniel Lucio Quiroz
[EMAIL PROTECTED] wrote:
 After debugging,

 I've found that squid is sending a RST packet to a Windows station (WinXP SP2
 or WinVista).

 Squid is not configured to send RSTs.  Is there any explanation for this?

 Regards,

 LD



Re: [squid-users] squid3 keeps many idle connections

2008-10-22 Thread Itzcak Pechtalt
Hi,
If you use a transparent cache, you will have several connections open
per client IP.

Itzcak

On Wed, Oct 22, 2008 at 11:31 AM, Malte Schröder [EMAIL PROTECTED] wrote:
 Hello,
 Squid3 seems to keep a LOT (over a thousand) idle connections to its
 parent proxy. To me it seems as if it doesn't properly reuse existing
 connections. Is there a way to find out what's going on? From what I
 can see there are not more than about two dozen requests at the
 same time. I already reduced pconn_timeout to 10 seconds to reduce the
 number of connections that are open to around 100.

 Kind regards
 Malte



Re: [squid-users] Cache_dir more than 10GB

2008-10-12 Thread Itzcak Pechtalt
Hi,

I reviewed the Squid filemap code, and it's clear that in some cases
a large cache will cause high CPU load.
In filemap.c, file_map_create() starts with 2^13 elements
and expands the bitmap only after it is full.
So, for example, if the number of cached objects stays slightly below 2^23 and the
bitmap size is 2^23, it takes a lot of CPU to find the next free bit.
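
To make the point concrete, here is a small self-contained illustration
(simplified, not the actual filemap.c code) of a linear next-free-bit scan
over a 2^23-bit map; when nearly every bit is set, each allocation has to
walk almost the whole map:

#include <stdio.h>
#include <string.h>

#define NBITS (1 << 23)              /* 2^23 slots, as in the example above */

static unsigned char map[NBITS / 8];

/* return the index of the first clear bit, or -1 if the map is full */
static long next_free_bit(void)
{
    long i;
    for (i = 0; i < NBITS; i++)
        if (!(map[i >> 3] & (1 << (i & 7))))
            return i;
    return -1;
}

int main(void)
{
    memset(map, 0xff, sizeof(map));   /* pretend the cache is nearly full */
    map[sizeof(map) - 1] &= ~0x80;    /* leave one free slot at the very end */
    printf("first free slot: %ld (after scanning the whole map)\n", next_free_bit());
    return 0;
}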

Itzcak

On Mon, Oct 6, 2008 at 1:05 PM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:
 On sön, 2008-10-05 at 16:38 +0200, Itzcak Pechtalt wrote:
 When Squid reaches several million objects per cache_dir, it starts
 to consume a lot of CPU, because every insertion and deletion of an object
 takes a long time.

 Mine don't.

 On my Squid, an 80-100GB cache showed this CPU-consumption effect.

 That's a fairly small cache.

 The biggest cache I have been running was in the 1.5TB range, split over
 a number of cache_dir, about 130GB each I think.

 But it is important you keep the number of objects per cache_dir well
 below 2^24. Preferably not more than 2^23.


 What I think is that you got bitten by something other than cache size.

 Regards
 Henrik



Re: [squid-users] Cache_dir more than 10GB

2008-10-06 Thread Itzcak Pechtalt
On Mon, Oct 6, 2008 at 1:05 PM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:

 But it is important you keep the number of objects per cache_dir well
 below 2^24. Preferably not more than 2^23.

Is there any way to limit the number of objects in a cache_dir?

Thanks

Itzcak


Re: [squid-users] Unexpected MISSes; patching Accept-Encoding via header_access/header_replace?

2008-10-06 Thread Itzcak Pechtalt
On Mon, Sep 29, 2008 at 4:19 AM, Gordon Mohr [EMAIL PROTECTED] wrote:
 Using 2.6.14-1ubuntu2 in a reverse/accelerator setup.

 URLs I hope to be cached aren't, even after adjusting passed headers.

 For example, I request a URL with Firefox, get the expected MISS. Then
 request same URL with IE, get unexpected MISS when I'd like a HIT. Then
 request same URL with Chrome, get MISS instead of HIT. Finally, request with
 Safari, finally get a HIT.

 I gather that the key variable is the differing Accept-Encoding headers:

 Firefox: gzip,deflate
 IE: gzip, deflate
 Chrome: gzip,deflate,bzip2
 Safari: gzip, deflate (same as IE, hence the HIT)

 My theory was that stripping the varied header values and replacing them
 with the lowest-common-denominator (and the only variant ever returned by
 the parent server) could help. So I added the following to my squid
 configuration:

 header_access Accept-Encoding deny all
 header_replace Accept-Encoding gzip

 However, this has not changed the HIT/MISS pattern at all.

 Any other ideas for letting all these browsers share the same cached
 version?

If the HTTP response includes Vary: User-Agent, then Squid will give
a HIT only for the same
User-Agent; check whether that is your case.
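
For illustration, if the origin's reply carries something like

Vary: Accept-Encoding, User-Agent

then the cached variant is keyed on both request headers, and normalizing
Accept-Encoding alone will not produce a HIT across different browsers.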


 (Bonus question: My inner-server's 404 responses include a 24-hour Expires
 header. Will these be cached by squid for the declared period or the shorter
 negative-ttl? The info at
 http://wiki.squid-cache.org/SquidFaq/InnerWorkings#head-aed2acb07aed79ef1f7a590447b6a45a8dd8e7d1
 is unclear which wins.)

 - Gordon @ IA



Error caching is called negative caching, and it depends on the negative_ttl
parameter in squid.conf, which defaults to 5 minutes, so the 404 will be cached
for only 5 minutes by default.
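
If you want errors kept longer, a minimal squid.conf sketch (the value is
only an example) would be:

negative_ttl 60 minutes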

Itzcak


Re: [squid-users] Cache_dir more than 10GB

2008-10-05 Thread Itzcak Pechtalt
When Squid reaches several million objects per cache_dir, it starts
to consume a lot of CPU, because every insertion and deletion of an object
takes a long time.
On my Squid, an 80-100GB cache showed this CPU-consumption effect.

Itzcak

On Tue, Sep 30, 2008 at 11:01 AM, Amos Jeffries [EMAIL PROTECTED] wrote:
 Rafael Gomes wrote:

 On Tue, Sep 30, 2008 at 12:36 AM, Amos Jeffries [EMAIL PROTECTED]
 wrote:

 Is it true that there are problems with Cache_dir more than 10GB?

 No. I have larger caches here. Some others have caches in the TB range.

 Only cache_dir coss specifically are known to have maximum size issues
 due to the format design. And not handle large files.

 There are some related issues known;

  You might need Squid built with --enable-large-files to get a 64-bit
 build if you intend to pass entire DVDs through Squid.

 So, if these options are enabled in my binary, is it OK to handle large files?

  Squid-2 has issues with handling of very large individual files being
 somewhat slow.


 Many people talk about it, but I didn't find any information on the Squid
 website. Maybe I wasn't looking in the right place!

 So, if it is true, it will be a big problem, because with a big HD of more than
 100GB used entirely for cache, we will have problems with read and write
 speed on one HD.

 AUFS on Linux, or DiskD on *BSD should have no problem with that size.
 Just make sure there is enough RAM in use for a mem-cache and the file
 indexes.

 Why AUFS on Linux and DiskD on *BSD? What is the difference between those
 operating systems?

 Something we still need to track down about the OS implementation and Squid
 usage of AsyncIO threads makes it work on Linux much faster than BSD. Next
 best speed-wise is DiskD, so that's still recommended for *BSD.

 Amos
 --
 Please use Squid 2.7.STABLE4 or 3.0.STABLE9



[squid-users] Out of memory problem in huge cache

2008-09-24 Thread Itzcak Pechtalt
Hi,

I have Squid 2.6 running on RedHat Linux with 8 GB of memory, and
configured according to the Squid wiki recommendations at
http://wiki.squid-cache.org/SquidFaq/SquidMemory#head-09818ad4cb8a1dfea1f51688c41bdf4b79a69991

However, it crashes every couple of hours with a FATAL: xmalloc: Unable
to allocate xxx bytes! message.
I wonder whether I did something wrong in the configuration or it's a memory leak problem.

Any help will be appreciated.

Configuration and error details follow:

Squid version 2.6 STABLE 12
cache_dir size 100GB
cache_mem size 1.995 GB
memory_pools_limit 100 MB

So 2 GB for cache_mem plus 100 * 10 MB (per 1 GB of disk, per the wiki above)
comes to about 3 GB. In addition there is an additional 5 GB left
for other system use, so it should be enough.

My Linux kernel supports a 4 GB memory space per process (tested), so the
problem is not the process memory limit.

I found in Bugzilla a memory leak bug related to an http_access deny ACL
rule, but my cache is configured with http_access allow all (it's a
private network and there are no security issues).

Some prints from cache.log
2008/09/21 22:40:31| Swap maxSize 10240 KB, estimated 7876923 objects
2008/09/21 22:40:31| Target number of buckets: 393846
2008/09/21 22:40:31| Using 524288 Store buckets
2008/09/21 22:40:31| Max Mem  size: 1997824 KB
2008/09/21 22:40:31| Max Swap size: 10240 KB
2008/09/21 22:43:12| Finished rebuilding storage from disk.
2008/09/21 22:43:12|   14085510 Entries scanned
2008/09/21 22:43:12|   14084558 Objects loaded.
2008/09/21 22:43:12|   201 Duplicate URLs purged.
2008/09/21 22:43:12|   751 Swapfile clashes avoided.
2008/09/21 22:43:12|   Took 161.5 seconds (87200.4 objects/sec).
2008/09/21 22:43:17|   Completed Validation Procedure
2008/09/21 22:43:17|   Validated 14084555 Entries
2008/09/21 22:43:17|   store_swap_size = 100657756k

FATAL: xmalloc: Unable to allocate 65535 bytes!

Squid Cache (Version 2.6.STABLE12-20070429): Terminated abnormally.
CPU Usage: 5939.813 seconds = 2462.689 user + 3477.124 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 1
Memory usage for squid via mallinfo():
total space in arena:  -117092 KB
Ordinary blocks:   -129142 KB 286436 blks
Small blocks:   0 KB  0 blks
Holding blocks: 83308 KB 11 blks
Free Small blocks:  0 KB
Free Ordinary blocks:   12049 KB
Total in use:  -45834 KB 136%
Total free: 12049 KB -35%

thanks

Itzcak


Re: [squid-users] Object becomes STALE: refresh_pattern min and max

2008-09-24 Thread Itzcak Pechtalt
On Wed, Sep 24, 2008 at 1:39 PM, BUI18 [EMAIL PROTECTED] wrote:
 Hi -

 I have squid box with tons of disk for the cache_dir
 (hundreds of GB).  I use wget to perform some pre-fetching of large
 video files.  I've set the min and max age to 5 days and 7 days (in
 minutes).  And although I have plenty of disk space available, I still
 receive TCP_REFRESH_MISS for files that had been pre-fetched and later
 accessed the same day.  Does anyone know why Squid would consider it as
 STALE?  I thought that setting the min value of refresh_pattern for
 the video file would guarantee freshness.  Not only does the cache
 consider it STALE, it then goes and pre-fetches a new copy even though
 I know that the video file has not changed.  Any help would be greatly
 appreciated.  Thanks.





Hi,
Check whether the video URL changes from request to request. With YouTube
videos, even if the main URL is the same, there is a request ID in the URL that
changes per request.
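
For reference, a refresh_pattern of the kind described in the original post
(min 5 days and max 7 days, expressed in minutes) would look roughly like
this; the regex is only an example:

refresh_pattern -i \.(flv|mp4|wmv)$ 7200 90% 10080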

Itzcak


Re: [squid-users] Out of memory problem in huge cache

2008-09-24 Thread Itzcak Pechtalt
My Squid isn't built with 64-bit support.
My RedHat Linux supports up to 4 GB of memory per process.
I tested this with a short program that tries to allocate memory
until it fails, and only after about 4 GB of allocations does it fail.
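
Roughly, the test was along these lines (a minimal sketch of the idea, not
the exact program):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const size_t chunk = 16 * 1024 * 1024;    /* allocate 16 MB at a time */
    unsigned long total_mb = 0;
    void *p;

    while ((p = malloc(chunk)) != NULL) {
        memset(p, 0, chunk);                  /* touch the pages so they are really committed */
        total_mb += 16;
    }
    printf("allocation failed after %lu MB\n", total_mb);
    return 0;
}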

thanks

Itzcak

On Wed, Sep 24, 2008 at 3:56 PM, Amos Jeffries [EMAIL PROTECTED] wrote:
 Itzcak Pechtalt wrote:

 Hi,

 I have Squid 2.6 running on RedHat Linux with 8 GB of memory, and
 configured according to the Squid wiki recommendations at

 http://wiki.squid-cache.org/SquidFaq/SquidMemory#head-09818ad4cb8a1dfea1f51688c41bdf4b79a69991

 However, it crashes every couple of hours with a FATAL: xmalloc: Unable
 to allocate xxx bytes! message.
 I wonder whether I did something wrong in the configuration or it's a memory leak
 problem.


 Is your squid built with 64-bit support and --with-large-files to support
2GB total memory allocation?

 Amos
 --
 Please use Squid 2.7.STABLE4 or 3.0.STABLE9



[squid-users] Squid 2.6 HIT with Age header value higher then Cache-Control max-age value

2008-09-02 Thread Itzcak Pechtalt
Hi,

I'm using Squid 2.6 STABLE 14
and get the following strange combination in an HTTP response:

X-Cache: HIT from ...\r\n
Age: 1102479\r\n
Cache-Control: max-age=345600\r\n

Is there a valid explanation for this?
As far as I know from the HTTP RFC, an object is considered stale when Age is
greater than max-age.
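
For the numbers above: max-age=345600 is 4 days, while Age: 1102479 is
roughly 12.8 days, so by the RFC 2616 freshness calculation the object has
been past its freshness lifetime for more than 8 days.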

Thanks

Itzcak


[squid-users] Could I limit max cache time ?

2008-05-25 Thread Itzcak Pechtalt
Hi all

1)
My cache store has a great many unused objects that come with long expiry times.
Is there a way to force deletion of all objects after a certain time?

Itzcak


Re: [squid-users] High CPU usage problem on Squid 2.6 STABLE9

2007-01-30 Thread Itzcak Pechtalt

cache_dir diskd /cache/cache1 1000 2 256 Q1=64 Q2=72

Your configuration for L1 in cache_dir is 2, which seems too low.
By default it's 64; Henrik wrote the following in another mail here:


- L1 value:
  cache_dir/416  (old post from Henrik)
  cache_dir/500  (other post read here)
  cache_dir/256 * 0.6 (a more recent post from Henrik)
  Opinions?

They are all about the same.


So it will be a good idea to change it to at least 256 (in all your
cache_dir lines).

Itzcak


Re: [squid-users] High CPU usage problem on Squid 2.6 STABLE9

2007-01-28 Thread Itzcak Pechtalt

There is some problem; Squid normally handles much more load with low CPU usage.

What is your OS? Maybe it doesn't support epoll?

Itzcak

On 1/28/07, Robert [EMAIL PROTECTED] wrote:

Hello
I have a problem with high CPU usage and browsing latency (browsing
sites without squid is now faster than with squid enabled),

i have 3 SCSI disk in configuration:
cache_dir diskd /cache/cache1 1000 2 256 Q1=64 Q2=72
cache_dir diskd /cache/cache2 1000 2 256 Q1=64 Q2=72
cache_dir diskd /cache/cache3 1000 2 256 Q1=64 Q2=72
and mounted:
/dev/sdd1 on /cache/cache1 type reiserfs
(rw,noexec,nosuid,noatime,nodiratime,notail,nolog,block-allocator=noborder)
/dev/sde1 on /cache/cache2 type reiserfs
(rw,noexec,nosuid,noatime,nodiratime,notail,nolog,block-allocator=noborder)
/dev/sdf1 on /cache/cache3 type reiserfs
(rw,noexec,nosuid,noatime,nodiratime,notail,nolog,block-allocator=noborder)

Server have 2 processors Intel(R) Xeon(TM) CPU 3.06GHz

squid compiled with epool options:
Squid Cache: Version 2.6.STABLE9
configure options: '--enable-storeio=diskd,aufs'
'--enable-removal-policies=heap' '--disable-wccp' '--enable-arp-acl'
'--enable-cache-digests' '--enable-default-err-language=Polish'
'--enable-linux-netfilter' '--disable-ident-lookups'
'--disable-hostname-checks' '--enable-underscores' '--enable-async-io'
'--enable-kill-parent-hack' '--enable-dlmalloc'
'--enable-xmalloc-statistics' '--enable-epoll' '--with-pthreads'
'--disable-poll' '--disable-select' '--disable-kqueue'
and top shows cpu usage between: 45-80%

General Runtime Information:
Squid Object Cache: Version 2.6.STABLE9
Start Time: Sat, 27 Jan 2007 22:25:44 GMT
Current Time: Sun, 28 Jan 2007 12:13:04 GMT

Connection information for squid:
   Number of clients accessing cache: 600
   Number of HTTP requests received: 1207836
   Number of ICP messages received: 0
   Number of ICP messages sent: 0
   Number of queued ICP replies: 0
   Request failure ratio: 0.00
   Average HTTP requests per minute since start: 1459.9
   Average ICP messages per minute since start: 0.0
   Select loop called: 13632428 times, 3.641 ms avg
Cache information for squid:
   Request Hit Ratios: 5min: 27.8%, 60min: 32.4%
   Byte Hit Ratios: 5min: 3.2%, 60min: 7.5%
   Request Memory Hit Ratios: 5min: 26.6%, 60min: 37.1%
   Request Disk Hit Ratios: 5min: 17.4%, 60min: 19.5%
   Storage Swap size: 2764200 KB
   Storage Mem size: 262140 KB
   Mean Object Size: 9.42 KB
   Requests given to unlinkd: 0
Median Service Times (seconds)  5 min  60 min:
   HTTP Requests (All):   0.09736  0.07825
   Cache Misses:  0.15888  0.15048
   Cache Hits:0.01035  0.00767
   Near Hits: 0.20843  0.08265
   Not-Modified Replies:  0.00919  0.00562
   DNS Lookups:   0.00669  0.00464
   ICP Queries:   0.0  0.0
Resource usage for squid:
   UP Time: 49639.941 seconds
   CPU Time: 7749.424 seconds
   CPU Usage: 15.61%
   CPU Usage, 5 minute avg: 68.05%
   CPU Usage, 60 minute avg: 60.43%
   Process Data Segment Size via sbrk(): 380084 KB
   Maximum Resident Size: 0 KB
   Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
   Total space in arena:  380084 KB
   Ordinary blocks:   372509 KB  30299 blks
   Small blocks:   0 KB  0 blks
   Holding blocks:  3744 KB  6 blks
   Free Small blocks:  0 KB
   Free Ordinary blocks:7574 KB
   Total in use:  376253 KB 98%
   Total free:  7574 KB 2%
   Total size:383828 KB
Memory accounted for:
   Total accounted:   335021 KB
   memPoolAlloc calls: 147275404
   memPoolFree calls: 145284658
File descriptor usage for squid:
   Maximum number of file descriptors:   8192
   Largest file desc currently in use:255
   Number of file desc currently in use:  155
   Files queued for open:   0
   Available number of file descriptors: 8037
   Reserved number of file descriptors:   100
   Store Disk files open:   0
   IO loop method: epoll
Internal Data Structures:
   296720 StoreEntries
62366 StoreEntries with MemObjects
62303 Hot Object Cache Items
   293437 on-disk objects

Is it normal that the CPU usage is this high, or is something wrong?

Thanks for help
Robert



[squid-users] Client get HTTP Via header even if defined via off

2007-01-11 Thread Itzcak Pechtalt

Hi
squid.conf state the following:

#  TAG: via on|off
#   If set (default), Squid will include a Via header in requests and
#   replies.
#

However, I get the Via header even with via off. Why?

thanks

Itzcak Pechtalt


Re: [squid-users] squid error storeAufsOpenDone: (1) Operation not permitted

2007-01-07 Thread Itzcak Pechtalt

Hi,

I got such errors when I changed the cache_dir parameters in squid.conf.
It's because the old directory tree doesn't match the new parameters,
and as a result some objects aren't cached.

After reinitializing the cache directories (delete and recreate with squid -z)
the errors disappeared.
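
The reinit itself is roughly the following (the path is only an example;
use your own cache_dir locations):

squid -k shutdown
rm -rf /var/spool/squid/*
squid -z        # recreate the swap directory structure
squid           # start squid again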

Itzcak Pechtalt




On 1/7/07, Manoj Rajkarnikar [EMAIL PROTECTED] wrote:

Hi Henrik.

On Fri, 5 Jan 2007, Henrik Nordstrom wrote:

 fre 2007-01-05 klockan 12:01 +0545 skrev Manoj Rajkarnikar:
  yes.. the access rights shouldn't be the problem.. squid is caching fine.
  and the error is not only for this particular dir.

 What OS are you running?

CentOS 4.4.
kernel 2.6.18 manually compiled

[EMAIL PROTECTED] /]# squid -v
Squid Cache: Version 2.6.STABLE2
configure options: '--enable-snmp' '--prefix=/usr/local/squid'
'--enable-async-io' '--enable-storeio=ufs,diskd,aufs'
'--enable-removal-policies=lru,heap' '--enable-wccp'
'--disable-ident-lookup' '--enable-linux-netfilter'


 If RedHat/Fedora, then perhaps SELINUX is active preventing Squid from
 accessing the directory.


selinux is disabled.
from dmesg:

EXT3-fs: mounted filesystem with ordered data mode.
SELinux:  Disabled at runtime.
SELinux:  Unregistering netfilter hooks
audit(1167401254.252:2): selinux=0 auid=4294967295


 If SuSe, then perhaps Apparmor is doing the same...

 Or perhaps cache_effective_user is the problem.


I've not touched cache_effective_user/group configuration. defaults to
nobody.
from squid.conf:

#Default:
# cache_effective_user nobody

store dirs:
drwxr-xr-x   5 nobody nobody  4096 Jan  3 12:12 cache
drwxr-xr-x   4 nobody nobody  4096 Jan  3 12:12 cache1

[EMAIL PROTECTED] /]# ll cache*
cache:
total 24
drwxr-xr-x  18 nobody nobody  4096 Jan  3 12:16 cache
drwxr-xr-x  18 nobody nobody  4096 Jan  3 12:15 cache1
drwx--   2 nobody nobody 16384 Dec 29 12:50 lost+found

cache1:
total 8
drwxr-xr-x  18 nobody nobody 4096 Jan  3 12:16 cache
drwxr-xr-x  18 nobody nobody 4096 Jan  3 12:15 cache1


 Are you really sure your Squid is caching fine, and not only caching in
 memory?

100% affirmative.
from cachemgr:

Cache information for squid:
   Request Hit Ratios: 5min: 47.0%, 60min: 46.9%
   Byte Hit Ratios:5min: 38.1%, 60min: 30.1%
   Request Memory Hit Ratios:  5min: 21.3%, 60min: 20.6%
   Request Disk Hit Ratios:5min: 35.6%, 60min: 31.6%
   Storage Swap size:  17257844 KB
   Storage Mem size:   524180 KB
   Mean Object Size:   14.04 KB

Store Directory Statistics:
Store Entries  : 1237990
Maximum Swap Size  : 24576000 KB
Current Store Swap Size: 17320752 KB
Current Capacity   : 70% used, 30% free

Store Directory #0 (aufs): /cache/cache
FS Block Size 4096 Bytes
First level subdirectories: 16
Second level subdirectories: 256
Maximum Size: 8192000 KB
Current Size: 7352656 KB
Percent Used: 89.75%
Filemap bits in use: 527659 of 1048576 (50%)
Filesystem Space in use: 8969764/18930908 KB (47%)
Filesystem Inodes in use: 638893/4886400 (13%)
Flags:
Removal policy: heap

Store Directory #1 (aufs): /cache/cache1
FS Block Size 4096 Bytes
First level subdirectories: 16
Second level subdirectories: 256
Maximum Size: 4096000 KB
Current Size: 1299840 KB
Percent Used: 31.73%
Filemap bits in use: 103074 of 131072 (79%)
Filesystem Space in use: 8969764/18930908 KB (47%)
Filesystem Inodes in use: 638893/4886400 (13%)
Flags:
Removal policy: heap

Store Directory #2 (aufs): /cache1/cache
FS Block Size 4096 Bytes
First level subdirectories: 16
Second level subdirectories: 256
Maximum Size: 8192000 KB
Current Size: 7368448 KB
Percent Used: 89.95%
Filemap bits in use: 506308 of 1048576 (48%)
Filesystem Space in use: 12206044/74342972 KB (16%)
Filesystem Inodes in use: 692612/9453568 (7%)
Flags:
Removal policy: heap

Store Directory #3 (aufs): /cache1/cache1
FS Block Size 4096 Bytes
First level subdirectories: 16
Second level subdirectories: 256
Maximum Size: 4096000 KB
Current Size: 1299808 KB
Percent Used: 31.73%
Filemap bits in use: 98460 of 131072 (75%)
Filesystem Space in use: 12206044/74342972 KB (16%)
Filesystem Inodes in use: 692612/9453568 (7%)
Flags: SELECTED
Removal policy: heap


 Regards
 Henrik


Thanks.

--
Manoj Rajkarnikar

System Administrator
Vianet Communications Pvt Ltd
Pulchowk, Lalitpur, Nepal.
(PH)977-1-5546410




Re: [squid-users] how to get the max usable FD number squid used

2006-12-04 Thread Itzcak Pechtalt

Even if the compiled MAX_FD is large, you have to permit the local squid to open
more than 1024 file descriptors. On Linux the value depends on
/etc/security/limits.conf, may differ for every installation,
and can be changed dynamically.
To change it on the fly, run ulimit before running squid,
for example:  ulimit -n 32000
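
To make it persistent across restarts, the matching /etc/security/limits.conf
entries look something like this (the user name is an assumption; use whatever
user squid runs as):

squid    soft    nofile    32000
squid    hard    nofile    32000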

Itzcak


On 12/4/06, Jm lists [EMAIL PROTECTED] wrote:

Hello members,

When Squid is running (I mean it was compiled by someone else, not me), how
can I find the maximum usable FD number Squid uses?

I ask this because we have several squid hosts running for the same
backend applications, but some hosts' established connection counts are very
large and some are small. All these hosts have the same hardware
environment, so I think maybe the maximum FD available to squid is not the same.

Please help me, thanks.