Re: [squid-users] improve flow capacity for Squid

2008-11-27 Thread Adrian Chadd
Is that per-flow, or in total?



Adrian

2008/11/24 Ken DBA [EMAIL PROTECTED]:
 Hello,

 I've found that the flow capacity of Squid is quite limited.
 It's hard to even reach an upper limit of 150 Mbit/s.

 How can I improve the flow capacity of Squid in reverse-proxy mode?
 Thanks in advance.

 Ken







Re: [squid-users] tuning an overloaded server

2008-11-27 Thread Adrian Chadd
Gah, the way they work is really quite simple.

* ufs does the disk IO at the time the request happens. It used to try
using select/poll on the disk FDs, from what I can gather in the deep,
deep dark history of CVS, but that was probably so the disk IO happened
in the next IO loop and recursion was avoided.

* aufs operations push requests into a global queue which are then
dequeued by the aio helper threads as they become free. The aio helper
threads do the particular operation (open, close, read, write, unlink)
and then push the results into a queue so the main squid thread can
handle the callbacks at a later time.

* diskd operations push requests into a per storedir queue which is
then dequeued in order, one operation at a time, by the diskd helper.
The diskd helper does the normal IO operations (open, close, read,
write, unlink) and holds all the disk filedescriptors (ie, the main
squid process doesn't hold open the disk FDs; they're just given
handles.) The diskd processes do the operation and then queue the
result back to the main squid process which handles the callbacks at a
later time.
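
That queue-and-callback model is easy to picture with a tiny producer/consumer sketch (illustrative only, not Squid code: Perl threads and Thread::Queue stand in for the pthreads-based aio helpers, and stat() stands in for the real open/read/write/unlink operations; requires a threads-enabled perl):

#!/usr/bin/perl
# Illustrative sketch only: a main thread queues disk work, helper threads
# perform the blocking syscalls, and results are queued back so the main
# thread can run the callbacks later - the shape of the aufs model above.
use strict;
use warnings;
use threads;
use Thread::Queue;

my $requests = Thread::Queue->new();    # main thread -> helpers
my $results  = Thread::Queue->new();    # helpers -> main thread

sub helper {
    while ((my $path = $requests->dequeue()) ne 'QUIT') {
        # The helper thread does the blocking call (stat here) so the
        # main thread never waits on disk.
        my $size = (stat $path)[7];
        $results->enqueue(defined $size ? "$path: $size bytes"
                                        : "$path: stat failed");
    }
}

my @helpers = map { threads->create(\&helper) } 1 .. 4;

# "Main squid thread": queue some work, then collect completions in its
# event loop and run the callbacks there.
my @jobs = ('/etc/hosts', '/no/such/file');
$requests->enqueue($_) for @jobs;
print $results->dequeue(), "\n" for 1 .. @jobs;   # callbacks would run here

$requests->enqueue('QUIT') for @helpers;          # tell the helpers to exit
$_->join() for @helpers;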

AUFS works great where the system threads allow for concurrent
blocking syscalls. This meant Linux (linuxthreads being just
processes) and Solaris in particular worked great. The BSDs used
userland threads via a threading library which wrapped syscalls to
try and be non-blocking. This wouldn't work for disk operations and so
a disk operation stalled all threads in a given process. diskd, as far
as I can gather (Duane would know better!) came into existence to
solve a particular problem or two, and one of those problems was the
lack of scalable disk IO available in the BSDs.

FreeBSD in particular has since grown a real threading library which
supports disk IO happening across threads quite fine.

The -big- difference right now is how the various disk buffer cache
and VM systems handle IO. By default, the AUFS support in Squid only
uses the aio helper threads for a small subset of the operations. This
may work great under Linux, but operations such as write() and close()
block under FreeBSD (think 'writing out metadata', for example), and
this is largely what gives rise to the notion, among people who haven't
studied the problem in depth, that Linux is better. :)

hope that helps,



Adrian

2008/11/27 Amos Jeffries [EMAIL PROTECTED]:
 B. Cook wrote:

 On Nov 22, 2008, at 7:30 AM, Amos Jeffries wrote:

  8< -- snip -- >8



  That said, the BSD family of systems gets more out of diskd than aufs in
  current Squid.


 --
 Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2

 Hello,

 Sorry to bother..

 so even in any FreeBSD (6.3, 7.0, etc..) diskd is still better than aufs?

 and if so,

 http://wiki.squid-cache.org/Features/DiskDaemon

 this page talks about 2.4

 and I can't seem to find an aufs page.. I can find coss, but coss has been
 removed from 3.0..

 so again, diskd should be what FreeBSD users use?  As well as the kernel
 additions?  Even on 6.3 and 7.0 machines amd64 and i386 alike?

 Yes. We have some circumstantial info that leads us to believe it's probably a
 bug in the way Squid uses AUFS and in the underlying implementation differences
 between FreeBSD and Linux. We have not yet had anyone investigate deeply and
 correct the issue. So it's still there in all Squid releases.



 Thanks in advance..

 (I would think a wiki page on an OS would be very useful.. common configs
 for linux 2.x and BSD, etc.. )

 Many people are not as versed in squid as the developers, and giving them
 guidelines to follow would probably make it easier for them to use.. imho.

 They don't understand coss vs aufs vs diskd vs ufs.. ;)

 We are trying to get there :). It's hard for just a few people and
 non-experts in many areas at that. So if anyone has good knowledge of how
 AUFS works jump in with a feature page analysis.

 What we have so far in the way of config help is explained at
 http://wiki.squid-cache.org/SquidFaq/ConfiguringSquid#head-ad11ea76c4876a92aa1cf8fb395e7efd3e1993d5

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2




Re: [squid-users] Cache_dir more than 10GB

2008-11-27 Thread Adrian Chadd
2008/9/29 Amos Jeffries [EMAIL PROTECTED]:

  Squid-2 has issues with handling of very large individual files being
 somewhat slow.

Only if you have an insanely large cache_mem and
maximum_object_size_in_memory setting. Very large individual files on
disk are handled just as efficiently across all Squid versions.

If it's kept low then it performs just fine.




Adrian


Re: [squid-users] improve flow capacity for Squid

2008-11-28 Thread Adrian Chadd
Well, the way to start looking at that is getting to know your system
profiling tools.

I do this for a living on Solaris, FreeBSD and Linux - each has
different system profiling tools, all of which can tell you where the
problem may lie.

Considering people have deployed Squid forward and reverse proxies
that achieve much more than 150mbit/sec, even considering the
shortcomings of the codebases, I can't help but think there's
something else going on that isn't specifically Squid's fault. :)


Adrian


2008/11/28 Ken DBA [EMAIL PROTECTED]:



 --- On Thu, 11/27/08, Adrian Chadd [EMAIL PROTECTED] wrote:

 From: Adrian Chadd [EMAIL PROTECTED]
 Subject: Re: [squid-users] improve flow capacity for Squid
 To: [EMAIL PROTECTED]
 Cc: squid-users@squid-cache.org
 Date: Thursday, November 27, 2008, 11:09 PM
 Is that per-flow, or in total?


 I mean in total, thanks.







Re: [squid-users] improve flow capacity for Squid

2008-11-28 Thread Adrian Chadd
Heh. The best way under unix is a hybrid of threads and epoll/kqueue
w/ non-blocking socket IO.



Adrian

2008/11/28 Ken DBA [EMAIL PROTECTED]:



 --- On Sat, 11/29/08, Adrian Chadd [EMAIL PROTECTED] wrote:

 From: Adrian Chadd [EMAIL PROTECTED]


 Considering people have deployed Squid forward and reverse
 proxies
 that achieve much more than 150mbit/sec, even considering
 the
 shortcomings of the codebases,

 Thanks. I'm also hoping someone who has deployed Squid for this kind of
 high-throughput application can offer some help.

I can't help but think
 there's
 something else going on that isn't specifically
 Squid's fault. :)


 Oh, I was thinking the flow capacity is limited, maybe due to the way Squid
 does its IO? For example, it reads/writes sockets using epoll/select/poll,
 not threads/multiple processes. Thanks.

 Ken







Re: [squid-users] assertion failed: store_swapout.cc:317: mem->swapout.sio == self

2008-11-28 Thread Adrian Chadd
Does Squid-2.7.STABLE5 exhibit this issue?



Adrian


2008/11/28 Marcel Grandemange [EMAIL PROTECTED]:
 Looks like squid broke itself again.
 If anybody could advise me as to what's happening here it would be great.

 I'm thinking the move to v3 has been disastrous so far.

 Every time I now use our main proxy, the following happens:

 2008/11/29 03:10:19|   Validated 1285147 Entries
 2008/11/29 03:10:19|   store_swap_size = 25682924
 2008/11/29 03:10:19| storeLateRelease: released 0 objects
 2008/11/29 03:11:08| assertion failed: store_swapout.cc:317:
 mem->swapout.sio == self
 2008/11/29 03:11:17| Starting Squid Cache version 3.0.STABLE9 for
 amd64-portbld-freebsd7.0...
 2008/11/29 03:11:17| Process ID 32313
 2008/11/29 03:11:17| With 11072 file descriptors available
 2008/11/29 03:11:17| DNS Socket created at 0.0.0.0, port 63464, FD 7
 2008/11/29 03:11:17| Adding nameserver 127.0.0.1 from squid.conf
 2008/11/29 03:11:17| Adding nameserver 192.168.12.2 from squid.conf
 2008/11/29 03:11:17| Adding nameserver 192.168.12.3 from squid.conf
 2008/11/29 03:11:17| Unlinkd pipe opened on FD 12
 2008/11/29 03:11:17| Swap maxSize 71925760 KB, estimated 4795050 objects
 2008/11/29 03:11:17| Target number of buckets: 239752
 2008/11/29 03:11:17| Using 262144 Store buckets
 2008/11/29 03:11:17| Max Mem  size: 131072 KB
 2008/11/29 03:11:17| Max Swap size: 71925760 KB
 2008/11/29 03:11:22| Version 1 of swap file without LFS support detected...
 2008/11/29 03:11:22| Rebuilding storage in /mnt/cache1 (DIRTY)
 2008/11/29 03:11:22| Version 1 of swap file without LFS support detected...
 2008/11/29 03:11:22| Rebuilding storage in /mnt/cache2 (DIRTY)
 2008/11/29 03:11:22| Version 1 of swap file without LFS support detected...
 2008/11/29 03:11:22| Rebuilding storage in /usr/local/squid/cache (DIRTY)
 2008/11/29 03:11:22| Using Round Robin store dir selection
 2008/11/29 03:11:22| Set Current Directory to /usr/local/squid/cache
 2008/11/29 03:11:23| Loaded Icons.
 2008/11/29 03:11:23| Accepting  HTTP connections at 192.168.12.1, port 3128,
 FD 18.
 2008/11/29 03:11:23| Accepting  HTTP connections at 127.0.0.1, port 8080, FD
 19.
 2008/11/29 03:11:23| Accepting transparently proxied HTTP connections at
 127.0.0.1, port 3128, FD 20.
 2008/11/29 03:11:23| HTCP Disabled.
 2008/11/29 03:11:23| Accepting SNMP messages on port 3401, FD 21.
 2008/11/29 03:11:23| Configuring Parent 192.168.12.2/3128/3130
 2008/11/29 03:11:23| Ready to serve requests.
 2008/11/29 03:11:23| Store rebuilding is 3.48% complete
 2008/11/29 03:11:28| Done reading /mnt/cache1 swaplog (117800 entries)
 2008/11/29 03:11:28| Done reading /mnt/cache2 swaplog (117807 entries)


 It keeps crashing when you visit pages and reloading.. Input?
 Stable10 had other issues that prevent me from using it.




Re: [squid-users] TCP connections keep alive problem after 302 HTTP response from web

2008-11-30 Thread Adrian Chadd
Good detective work! I'm not sure whether this is a requirement or
not. Henrik would know better.

Henrik, is this worthy of a bugzilla report?


adrian

2008/11/30 Itzcak Pechtalt [EMAIL PROTECTED]:
 Hi

 I found some inefficiency in Squid's TCP connection handling toward servers.
 In some cases Squid closes the TCP connection to a server immediately after
 a 304 Not Modified response and doesn't save it for reuse.
 There is no visible reason why Squid closes the connection. Squid
 sends Connection: Keep-Alive in the HTTP request and the web server
 returns Connection: Keep-Alive in the response. Also, pconn_timeout
 is configured to 1 minute.

 After digging into the problem, I found that the problem occurs
 only in cases where the object type is PRIVATE. It seems that when the
 client_side code handles the 304 Not Modified reply it calls
 store_unregister, which closes the store entry and the TCP connection in turn.

 To reproduce it do the following
 1) Browse www.cnn.com
 2) Delete browser cache.
 3) Browse again. The case will occur here.

 Does someone know about it ?

 Itzcak

 Following is a short Wireshark sniff with one sample; 10.50.0.100 is the Squid
 IP. Note the FIN packet from Squid.

 0.00  10.50.0.100 -> 205.128.90.126 TCP 4006 > http [SYN] Seq=0
 Len=0 MSS=1460 WS=2
 0.085216 205.128.90.126 -> 10.50.0.100  TCP http > 4006 [SYN, ACK]
 Seq=0 Ack=1 Win=5840 Len=0 MSS=1460 WS=7
 0.085226  10.50.0.100 -> 205.128.90.126 TCP 4006 > http [ACK] Seq=1
 Ack=1 Win=5840 Len=0
 0.085230  10.50.0.100 -> 205.128.90.126 HTTP GET
 /cnn/.element/css/2.0/common.css HTTP/1.0
 GET /cnn/.element/css/2.0/common.css HTTP/1.0
 If-Modified-Since: Tue, 16 Sep 2008 14:48:32 GMT
 Accept: */*
 Referer: http://www.cnn.com/
 Accept-Language: en-us
 UA-CPU: x86
 User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET
 CLR 2.0.50727; .NET CLR 3.0.04506.30)
 Host: i.cdn.turner.com
 Cache-Control: max-age=259200
 Connection: keep-alive

 0.172250 205.128.90.126 -> 10.50.0.100  TCP http > 4006 [ACK] Seq=1
 Ack=366 Win=6912 Len=0
 0.172934 205.128.90.126 -> 10.50.0.100  HTTP HTTP/1.1 304 Not Modified
 HTTP/1.1 304 Not Modified
 Date: Wed, 26 Nov 2008 12:33:33 GMT
 Expires: Wed, 26 Nov 2008 13:03:51 GMT
 Last-Modified: Tue, 16 Sep 2008 14:48:32 GMT
 Cache-Control: max-age=3600
 Connection: keep-alive

 0.173145  10.50.0.100 -> 205.128.90.126 TCP 4006 > http [ACK] Seq=366
 Ack=206 Win=6912 Len=0
 0.173238  10.50.0.100 -> 205.128.90.126 TCP 4006 > http [FIN, ACK]
 Seq=366 Ack=206 Win=6912 Len=0
 0.259520 205.128.90.126 -> 10.50.0.100  TCP http > 4006 [FIN, ACK]
 Seq=206 Ack=367 Win=6912 Len=0
 0.259906  10.50.0.100 -> 205.128.90.126 TCP 4006 > http [ACK] Seq=367
 Ack=207 Win=6912 Len=0
 0.565702 205.128.90.126 -> 10.50.0.100  TCP http > 4006 [FIN, ACK]
 Seq=206 Ack=367 Win=6912 Len=0
 0.565842  10.50.0.100 -> 205.128.90.126 TCP [TCP Dup ACK 10#1] 4006 >
 http [ACK] Seq=367 Ack=207 Win=6912 Len=0




Re: [squid-users] Number of Spindles

2008-12-05 Thread Adrian Chadd
Things have changed somewhat since that algorithm was decided upon.

Directory searches were linear and the amount of buffer cache /
directory name cache available wasn't huge.

Having large directories took time to search and took RAM to cache.

No one's really sat down and done any hard-core tuning - or at least,
they've done it, but haven't published the results anywhere. :)



Adrian

2008/12/3 Nyamul Hassan [EMAIL PROTECTED]:
  Why aren't there any (or only marginal / insignificant) improvements beyond 3
  spindles?  Is it because Squid is a single-threaded application?

 On this note, what impact does the L1 and L2 directories have on AUFS
 performance?  I understand that these are there to control the number of
 objects in each folder.  But, what would be a good number of files to keep
 in a directory, performance wise?

 Regards
 HASSAN



 - Original Message - From: Amos Jeffries [EMAIL PROTECTED]
 To: Henrik Nordstrom [EMAIL PROTECTED]
 Cc: Nyamul Hassan [EMAIL PROTECTED]; Squid Users
 squid-users@squid-cache.org
 Sent: Monday, December 01, 2008 04:33
 Subject: Re: [squid-users] Number of Spindles


  Sun 2008-11-30 at 09:56 +0600, Nyamul Hassan wrote:

 The primary purpose of these tests is to show that Squid's performance
 doesn't increase in proportion to the number of disk drives. Excluding
 other
 factors, you may be able to get better performance from three systems
 with
 one disk drive each, rather than a single system with three drives.

 There is a significant difference up to 3 drives in my tests.


  Um, can you clarify please? Do you mean your experience differs from what was
  described, or that separate systems are faster up to 3 drives?

 Amos







Re: [squid-users] How to interrupt ongoing transfers?

2008-12-05 Thread Adrian Chadd
Someone may beat me to this, but I'm actually proposing a quote to a
company to implement quota services in Squid to support stuff just
like what you've asked for.

I'll keep the list posted about this. Hopefully I'll get the green
light in a week or so and can begin work on implementing the
functionality in Squid-2.

Thanks,



Adrian

2008/12/5 Kaustav Dey Biswas [EMAIL PROTECTED]:
 Hi,

 I am a squid newbie. I am trying to set up daily download quotas for NCSA 
 authorized users. I have a daemon running which checks the log files, and 
 whenever the download limit is reached (for a particular user), it blocks that 
 user in the config and reconfigures squid (squid -k reconfigure) for the 
 changes to take effect.
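
 (For illustration, such a daemon might look roughly like the sketch below - the
 access.log field positions, the file paths and the blocked-users ACL wiring are
 assumptions for this example, not a recommendation:)

 #!/usr/bin/perl
 # Rough sketch of the quota daemon described above (not part of Squid).
 # Assumes the default "native" access.log format (bytes = field 5,
 # username = field 8) and a blocked_users file that squid.conf references
 # via something like:
 #   acl overquota proxy_auth "/usr/local/squid/etc/blocked_users"
 #   http_access deny overquota
 # A real daemon would loop/tail the log; this is a single pass.
 use strict;
 use warnings;

 my $limit   = 500 * 1024 * 1024;                      # 500 MB per day
 my $access  = '/usr/local/squid/logs/access.log';     # assumed path
 my $blocked = '/usr/local/squid/etc/blocked_users';   # assumed path

 my %bytes;
 open my $log, '<', $access or die "open $access: $!";
 while (<$log>) {
     my @f = split;
     next if !defined $f[7] || $f[7] eq '-';           # no authenticated user
     $bytes{$f[7]} += $f[4];                           # sum bytes per user
 }
 close $log;

 my @over = grep { $bytes{$_} > $limit } keys %bytes;
 if (@over) {
     open my $out, '>>', $blocked or die "open $blocked: $!";
     print {$out} "$_\n" for @over;
     close $out;
     system('squid', '-k', 'reconfigure');             # as in the workflow above
 }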

 The problem is, if an http/ftp transfer is on (for that user), the changes 
 made in the config don't take effect until that transfer session completes.

 Is there any way I can interrupt the transfer somehow (or say, force squid to 
 re-read its ACL) without affecting sessions of other users?

 Thanks & Regards,
 Kaustav Dey Biswas







Re: [squid-users] Number of Spindles

2008-12-06 Thread Adrian Chadd
2008/12/5 Nyamul Hassan [EMAIL PROTECTED]:
 Thx for the response Adrian.  Earlier I was using only AUFS on each drive,
 and the system choked on IOWait above 200 req/sec.  But, after I added COSS
 in the mix, it improved VASTLY.

Well, that's why it's there, right? :)



 Since you're the COSS expert, I would really love to hear about what you
 think of my configuration options for COSS above.  Do you think I can
 improve them?

Not off the top of my head, no.

 As for L1 and L2 numbers in AUFS, can you suggest any benchmark tests which
 I can run and give you feedback?

Again, not off the top of my head. I'd look at trying to gather stats
on various types of memory usage and IO patterns and do some
statistical comparisons. I've been focusing on different areas lately
so I'm not really in the storage headspace right now :)

 Also, if anybody else can share their ideas / experience, it would be great!
 I'm a bit puzzled about the following:

  1.  Although I've set the cache_replacement_policy differently for each
  (GDSF for COSS and LFUDA for AUFS), as suggested by the HP
  whitepaper which is referenced in the config file, the Current Squid
  Configuration page in CacheMGR shows only LFUDA above all the 8 (eight)
  cache_store entries.  Does that mean all of them are LFUDA?  Isn't GDSF
  better for smaller objects?

COSS will always be LRU. It's the nature of the storage system itself.
You can't override that.


 2.  When I had only one type of storage (AUFS), it was easy to find out the
  average objects per cache_store.  However, now that I have 2 types on each of
 the 4 HDDs, I can't seem to find out how many of the total 11,000,000 plus
 objects that are being reported in CacheMGR are actually in the COSS and
 AUFS partitions.  Is there a way to find that out?

I thought that the storedir page listed the number of objects in the
cache. Hm, if it doesn't then it shouldn't be that difficult to patch
stuff in to track the number of objects in each storedir.



Adrian


Re: [squid-users] How to interrupt ongoing transfers?

2008-12-07 Thread Adrian Chadd
There isn't. Sorry.



Adrian


2008/12/7 Kaustav Dey Biswas [EMAIL PROTECTED]:
 Hi Adrian,

 Thanks a lot for your prompt reply.

 Actually, I need to implement the quota system as a part of my final year 
  Engineering project. I am planning to make it a sort of add-on package 
 over Squid, which will be compatible with all current versions of Squid. As 
 you can see, modifying the Squid source code is not an option for me.

 Please let me know if there is any way (or workaround) by which I can 
 interrupt ongoing transfers in current versions of Squid without having to 
  patch & rebuild it.

  Thanks & Regards,
 Kaustav



 - Original Message 
 From: Adrian Chadd [EMAIL PROTECTED]
 To: Kaustav Dey Biswas [EMAIL PROTECTED]
 Cc: Squid squid-users@squid-cache.org
 Sent: Saturday, 6 December, 2008 12:28:10 AM
 Subject: Re: [squid-users] How to interrupt ongoing transfers?

 Someone may beat me to this, but I'm actually proposing a quote to a
 company to implement quota services in Squid to support stuff just
 like what you've asked for.

 I'll keep the list posted about this. Hopefully I'll get the green
 light in a week or so and can begin work on implementing the
 functionality in Squid-2.

 Thanks,



 Adrian

 2008/12/5 Kaustav Dey Biswas [EMAIL PROTECTED]:
 Hi,

 I am a squid newbie. I am trying to set up daily download quotas for NCSA 
 authorized users. I have a daemon running which checks the log files, and 
  whenever the download limit is reached (for a particular user), it blocks 
 that user in the config and reconfigures squid (squid -k reconfigure) for 
 the changes to take effect.

 The problem is, if an http/ftp transfer is on (for that user), the changes 
  made in the config don't take effect until that transfer session completes.

 Is there any way I can interrupt the transfer somehow (or say, force squid 
 to re-read its ACL) without affecting sessions of other users?

  Thanks & Regards,
 Kaustav Dey Biswas







Re: [squid-users] What does storeClientCopyEvent mean?

2008-12-09 Thread Adrian Chadd
It's a hack to defer a storage manager transaction from beginning whilst
another one is in progress for that same connection.

I'd suggest using your OS profiling to figure out where the CPU is
being spent. This may be a symptom, not the cause.


adrian

2008/12/7 Bin Liu [EMAIL PROTECTED]:
 Hi there,

 Squid is pegging the CPU at 100% with storeClientCopyEvent and hit
 service time soars up to several seconds here. The following is what I
 see in cachemgr:events:

 Operation                  Next Execution         Weight   Callback Valid?
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       0.00 seconds           0        yes
 storeClientCopyEvent       0.00 seconds           0        yes
 storeClientCopyEvent       0.00 seconds           0        yes
 storeClientCopyEvent       0.00 seconds           0        yes
 storeClientCopyEvent       0.00 seconds           0        yes
 storeClientCopyEvent       0.00 seconds           0        yes
 storeClientCopyEvent       0.00 seconds           0        yes
 storeClientCopyEvent       0.00 seconds           0        yes
 storeClientCopyEvent       0.00 seconds           0        yes
 storeClientCopyEvent       0.00 seconds           0        yes
 storeClientCopyEvent       0.00 seconds           0        yes
 MaintainSwapSpace          0.980990 seconds       1        N/A
 idnsCheckQueue             1.00 seconds           1        N/A
 ipcache_purgelru           5.457004 seconds       1        N/A
 wccp2HereIam               5.464900 seconds       1        N/A
 fqdncache_purgelru         5.754399 seconds       1        N/A
 storeDirClean              10.767635 seconds      1        N/A
 statAvgTick                59.831274 seconds      1        N/A
 peerClearRR                110.539127 seconds     0        N/A
 peerClearRR                279.341239 seconds     0        N/A
 User Cache Maintenance     1610.136367 seconds    1        N/A
 storeDigestRebuildStart    1730.225879 seconds    1        N/A
 storeDigestRewriteStart    1732.267852 seconds    1        N/A
 peerRefreshDNS             1957.777934 seconds    1        N/A
 peerDigestCheck            2712.910515 seconds    1        yes

 So what does storeClientCopyEvent mean? Is it disk IO causing this problem?

 Regards,
 Liu




Re: [squid-users] What does storeClientCopyEvent mean?

2008-12-11 Thread Adrian Chadd
Which version of Squid are you using again? I patched the latest
Squid-2.HEAD with some aufs related fixes that reduce the amount of
callback checking which is done.

Uhm, check src/fs/aufs/store_asyncufs.h :

/* Which operations to run async */
#define ASYNC_OPEN 1
#define ASYNC_CLOSE 0
#define ASYNC_CREATE 1
#define ASYNC_WRITE 0
#define ASYNC_READ 1

That's the default on Squid-2.HEAD. I've just changed them all to be
async under cacheboy-1.6 and this performs great under FreeBSD-7 +
AUFS in my testing.



Adrian

2008/12/11 Bin Liu binliu.l...@gmail.com:
 Thanks for your reply, Adrian. I really appreciate your help.

 I'd suggest using your OS profiling to figure out where the CPU is
 being spent. This may be a symptom, not the cause.

 Here is the top output snapshot:

 last pid: 76181;  load averages:  1.15,  1.12,  1.08    up 6+05:35:14  22:25:07
 184 processes: 5 running, 179 sleeping
 CPU states: 24.2% user,  0.0% nice,  3.8% system,  0.0% interrupt, 72.0% idle
 Mem: 4349M Active, 2592M Inact, 599M Wired, 313M Cache, 214M Buf, 11M Free
 Swap: 4096M Total, 4096M Free

  PID USERNAME   THR PRI NICE   SIZERES STATE  C   TIME   WCPU COMMAND
 38935 nobody  27  440  4385M  4267M ucond  1 302:19 100.00% squid
 46838 root 1  440 24144K  2344K select 0   3:09  0.00% snmpd
  573 root 1  440  4684K   608K select 0   0:34  0.00% syslogd
  678 root 1  440 24780K  4360K select 1   0:12  0.00% perl5.8.8
  931 root 1  440 10576K  1480K select 0   0:11  0.00% sendmail
  871 root 1  440 20960K   508K select 3   0:08  0.00% sshd
  941 root 1   80  5736K   424K nanslp 2   0:03  0.00% cron
 14177 root 1  440 40620K  2648K select 0   0:02  0.00% httpd


  # iostat 1 5
  tty da0  da1  da2 cpu
  tin tout  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
   4   75 28.86   5  0.13  41.08  22  0.88  40.01  42  1.66   6  0  4  0 89
   0  230 22.86   7  0.16  21.33   6  0.12  44.78  23  1.00  23  0  4  0 73
   0   77 16.00   1  0.02  51.56  27  1.35  40.38  48  1.88  22  0  6  0 72
   0   77 16.00   8  0.12  18.29   7  0.12  26.64  22  0.57  24  0  3  0 72
   0   77 16.00   2  0.03  32.00   2  0.06  41.43  35  1.41  24  0  4  0 71

 # vmstat 1 5
  procs      memory       page                     disks     faults      cpu
  r b w     avm     fre   flt  re  pi  po    fr  sr da0 da1   in   sy
 cs us sy id
  1 2 0 13674764 386244   455   9   0   0   792 4672   0   0 4147 6847
 6420  6  4 89
  1 1 0 13674764 383112  1365   4   0   0   147   0   2   4 5678 9065
 16860 18  6 76
  1 1 0 13674764 383992   894   3   0   0   916   0   5   6 5089 7950
 16239 22  5 73
  1 1 0 13674764 378624  1399  11   0   052   0  11   1 5533 10447
 18994 23  5 72
  1 1 0 13674768 373360  1427   6   0   030   0   9   3 5919 10913
 19686 25  5 70


 ASYNC IO Counters:
 Operation        # Requests
 open             2396837
 close            1085
 cancel           2396677
 write            3187
 read             16721807
 stat             0
 unlink           299208
 check_callback   800440690
 queue            14


 I've noticed that the counter 'queue' is relatively high, which
 normally should always be zero. But the disks seem pretty idle. I've
 tested that by copying some large files to the cache_dir - very fast. So
 there must be something blocking squid. I've got 2 boxes with the same
 hardware/software configuration running load-balanced; when one of
 them was blocking, the other one ran pretty well.

 I'm using FreeBSD 7.0 + AUFS, and I've also noticed what you have
 written several days ago
 (http://www.squid-cache.org/mail-archive/squid-users/200811/0647.html),
 which mentions that some operations may  block under FreeBSD. So could
 that cause this problem?

 Thanks again.

 Regards,
 Liu


 On Tue, Dec 9, 2008 at 23:28, Adrian Chadd adr...@squid-cache.org wrote:
 It's a hack to defer a storage manager transaction from beginning whilst
 another one is in progress for that same connection.

 I'd suggest using your OS profiling to figure out where the CPU is
 being spent. This may be a symptom, not the cause.


 adrian

 2008/12/7 Bin Liu binliu.l...@gmail.com:
 Hi there,

 Squid is pegging the CPU at 100% with storeClientCopyEvent and hit
 service time soars up to several seconds here. The following is what I
 see in cachemgr:events:

 Operation                  Next Execution         Weight   Callback Valid?
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes
 storeClientCopyEvent       -0.019010 seconds      0        yes

Re: [squid-users] Performance problems with 2.6.STABLE18

2008-12-17 Thread Adrian Chadd
2008/12/17 Mark Kent mk...@messagelabs.com:

 I tried running under valgrind, and it found a couple of leaks, but I'm
 not sure that those are strictly the problem. If it were a traditional
 memory leak, where memory was just wandering off, I don't quite see why
 the CPU would climb along with the memory usage.

Grab oprofile and do some digging?


Adrian


 Mark.



 -Original Message-
 From: Kinkie [mailto:gkin...@gmail.com]
 Sent: Wednesday, December 17, 2008 4:50 PM
 To: Mark Kent
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] Performance problems with 2.6.STABLE18

 On Wed, Dec 17, 2008 at 4:24 PM, Mark Kent mk...@messagelabs.com
 wrote:

  Hi,

  I'm currently having a performance issue with Squid 2.6.STABLE18
 (running on RHEL4). As I run traffic through the proxy, the memory
 grows steadily, and apparently without limit. This increase in memory
 usage is coupled with a steadily growing CPU usage, up to a point at
 which a single core is saturated (97% usage at ~400MB of RSS). At this

 point, the latency of requests increases. When the load is taken off
 the proxy, the CPU returns to minimal usage, but the memory usage
 sticks at the high water mark.

  I should point out that I'm using squid for authentication only (HTTP

 digest), not for caching. Consequently, I have maximum_object_size and

 maximum_object_size_in_memory both set to 0 in the squid config file.
 My understanding is that this should be sufficient to stop squid from
 caching.

  There's plenty of spare physical RAM on the machine, so it seems
 unlikely that it's a memory shortage causing the performance problem.
 My interpretation is that something has gotten too large for Squid to
 handle but, without object caching, it's not clear to me what that
 might be. I would blame the authentication cache, but there's only
 2000 different users.

  Does anyone have an idea what might be going on, and how to fix it?

 There may be a memory leak somewhere.
 Squid 2.6 is rather old; can you try upgrading to the latest 2.7 STABLE
 release?


Kinkie





Re: [squid-users] storeurl_rewrite and ICP

2008-12-18 Thread Adrian Chadd
Nope, I don't think the storeurl-rewriter stuff was ever integrated into ICP.

I think someone posted a patch to the squid bugzilla to implement this.

I'm happy to commit whatever people sensibly code up and deploy. :)



Adrian

2008/12/18 Imri Zvik im...@bsd.org.il:
 Hi,

 I'm using the storeurl_rewrite feature to store content with changing
 attributes.

 As my traffic grows, I want to be able to add cache_peers to share the load.

 After configuring the peers, I've found that all my ICP queries result
 in misses.
 It seems like the storeurl_rewrite logic is not implemented in the ICP
 queries - i.e., neither the ICP client nor the server passes the URL through
 the storeurl_rewrite process before checking whether the requested content is
 cached or not.

 Am I missing something?



 Thank you in advance,




Re: [squid-users] cached MS updates !

2008-12-21 Thread Adrian Chadd
The one thing I've been looking to do for other updates is to
post-process store.log and find URLs which have been partially replied
to (206) and end in various extensions, then queue entire-file
fetches of them to make sure they fully enter the cache.

It's suboptimal but it seems to work just fine.
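
A rough sketch of that post-processing step (the store.log field positions
drift a little between Squid versions, so this just looks for a "206" status
field and takes the last field as the URL; the extension list is made up):

#!/usr/bin/perl
# Sketch: pull partially-replied (206) URLs of interest out of store.log.
# Usage:  ./find-206.pl /path/to/store.log > urls.txt
use strict;
use warnings;

my %seen;
while (<>) {
    my @f = split;
    next unless grep { $_ eq '206' } @f;                   # partial reply
    my $url = $f[-1];                                      # URL is the last field
    next unless $url =~ /\.(?:cab|exe|msi|psf|zip)$/i;     # extensions of interest
    print "$url\n" unless $seen{$url}++;
}
# Feed the output to a fetcher that goes through the proxy so the full
# object gets pulled into the cache, e.g. squidclient in a shell loop.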



adrian

2008/12/21 Oleg Motienko motie...@gmail.com:
 On Tue, Jun 17, 2008 at 1:24 AM, Henrik Nordstrom
 hen...@henriknordstrom.net wrote:
 On Mon, 2008-06-16 at 08:16 -0700, pokeman wrote:
 Thanks Henrik for your reply.
 Is there any other way to save bandwidth? Windows updates use almost 30% of my
 entire bandwidth.

 Microsoft has a update server you can run locally. But you need to have
 some control over the clients to make them use this instead of windows
 update...

 Or you could look into sponsoring some Squid developer to add caching of
 partial objects with the goal of allowing http access to windows update
 to be cached. (the versions using https can not be done much about...)

 I made such caching work by removing the Range header from requests
 (transparently redirecting to an nginx webserver in proxy mode in front of squid).
 Works fine for my ~1500 users. The cache size is 4G for now and growing.
 Additionally, it's possible to make a static cache (I made it on the same
 nginx, via proxy_store), so big files like service packs will be stored
 in the filesystem. It's also possible to put already-downloaded service packs
 and fixes into the filesystem, which will save bandwidth.

 Squid is running a transparent port on http://127.0.0.1:1 .
 HTTP requests from the LAN to windowsupdate networks are redirected to 127.0.0.4:80

 Nginx caches cab/exe/psf files and cuts off the Range header; other requests
 are redirected to the MS sites.

 Here is nginx config for caching site:

server {
 listen 127.0.0.4:80;
server_name  au.download.windowsupdate.com
 www.au.download.windowsupdate.com;

access_log
 /var/log/nginx/access-au.download.windowsupdate.com-cache.log  main;


 # root url - don't cache here

location /  {
 proxy_pass http://127.0.0.1:1;
proxy_set_header   Host $host;
}


 # ? urls - don't cache here
location ~* \?  {
 proxy_pass http://127.0.0.1:1;
proxy_set_header   Host $host;
}


 # here is static caching

location ~* ^/msdownload.+\.(cab|exe|psf)$ {
root /.1/msupd/au.download.windowsupdate.com;
error_page   404 = @fetch;
}


location @fetch {
internal;

 proxy_pass http://127.0.0.1:1;
 proxy_set_header   Range '';
proxy_set_header   Host $host;

proxy_store  on;
proxy_store_access   user:rw  group:rw  all:rw;
proxy_temp_path  /.1/msupd/au.download.windowsupdate.com/temp;

root /.1/msupd/au.download.windowsupdate.com;
}

 # error messages (if got err from squid)

error_page   500 502 503 504  /50x.html;
location = /50x.html {
root   html;
}


}



Re: [squid-users] storeurl_rewrite and ICP

2008-12-24 Thread Adrian Chadd
Thanks. Be sure to comment on the bugzilla ticket too.

Oh and tell me which bug it is so I can make sure I'm watching it. :)


Adrian

2008/12/23 Imri Zvik im...@bsd.org.il:
 On Sunday 21 December 2008 10:52:42 Imri Zvik wrote:
 Hi,

 On Thursday 18 December 2008 21:57:22 Adrian Chadd wrote:
  Nope, I don't think the storeurl-rewriter stuff was ever integrated into
  ICP.
 
  I think someone posted a patch to the squid bugzilla to implement this.

 If you can point me to said patch, I'd be happy to test it under load.

  I'm happy to commit whatever people sensibly code up and deploy. :)
 
 
 
  Adrian
 
  2008/12/18 Imri Zvik im...@bsd.org.il:
   Hi,
  
   I'm using the storeurl_rewrite feature to store content with changing
   attributes.
  
   As my traffic grows, I want to be able to add cache_peers to share the
   load.
  
   After configuring the peers, I've found out that all my ICP queries
   results with misses.
   It seems like the storeurl_rewrite logic is not implemented in the ICP
   queries - i.e., nor the ICP client or the server passes the URL through
   the storeurl_rewrite process before checking if the requested content
   is cached or not.
  
   Am I missing something?
  
  
  
   Thank you in advance,

 Thanks!


 I've found the said patch in Squid's bugzilla - it seems to be working, but
 I'm going to test the patch under load (~700 mbit) and report back.





Re: [squid-users] storeurl_rewrite and ICP

2008-12-24 Thread Adrian Chadd
I'm still not sure whether the correct behaviour is to send ICP for
the rewritten URL, or to rewrite the URLs being received before
they're looked up.

Hm!



Adrian

2008/12/24 Imri Zvik im...@bsd.org.il:
 On Wednesday 24 December 2008 17:01:39 Adrian Chadd wrote:
 Thanks. Be sure to comment on the bugzilla ticket too.

 Oh and tell me which bug it is so I can make sure I'm watching it. :)


 Adrian

 2008/12/23 Imri Zvik im...@bsd.org.il:
  On Sunday 21 December 2008 10:52:42 Imri Zvik wrote:
  Hi,
 
  On Thursday 18 December 2008 21:57:22 Adrian Chadd wrote:
   Nope, I don't think the storeurl-rewriter stuff was ever integrated
   into ICP.
  
   I think someone posted a patch to the squid bugzilla to implement
   this.
 
  If you can point me to said patch, I'd be happy to test it under load.
 
   I'm happy to commit whatever people sensibly code up and deploy. :)
  
  
  
   Adrian
  
   2008/12/18 Imri Zvik im...@bsd.org.il:
Hi,
   
I'm using the storeurl_rewrite feature to store content with
changing attributes.
   
As my traffic grows, I want to be able to add cache_peers to share
the load.
   
After configuring the peers, I've found out that all my ICP queries
results with misses.
It seems like the storeurl_rewrite logic is not implemented in the
ICP queries - i.e., nor the ICP client or the server passes the URL
through the storeurl_rewrite process before checking if the
requested content is cached or not.
   
Am I missing something?
   
   
   
Thank you in advance,
 
  Thanks!
 
  I've found the said patch in squid's bugzilla - It seems to be working,
  but I'm going to test the patch under load (700 mbit~) and report back.

 Here is the bug report: http://www.squid-cache.org/bugs/show_bug.cgi?id=2354

 The patch works flawlessly so far.




Re: [squid-users] HTTP_HEADER

2009-01-07 Thread Adrian Chadd
No, I don't think it can.

I'm just wrapping up some changes to FreeBSD-current and my Squid fork
to support tproxy-like functionality under FreeBSD + ipfw.



Adrian

2009/1/7 Mehmet ÇELİK r...@justunix.org:

 As per usual, the easiest fix is to re-write the web app properly.
 The REMOTE_ADDR is taken by PHP from the network layer below everything.

 Otherwise you will have to patch your kernel and use the tproxy feature
 of Squid.

 Amos
 --
 Please be using
 Current Stable Squid 2.7.STABLE5 or 3.0.STABLE11
 Current Beta Squid 3.1.0.3


 I understand you, and thanks. But I am using OpenBSD PF. So can Squid
 provide linux-tproxy-like support for OpenBSD PF? I don't know.

 Regards,
 Mehmet CELIK



[squid-users] FreeBSD users: 'squidstats' package

2009-01-10 Thread Adrian Chadd
Hi guys,

Those of you who are using FreeBSD should have a look at squidstats.
It's based on Henrik's scripts to gather basic statistics from Squid
via SNMP and graph them. It lives in a googlecode project I created
and I'm also the port maintainer. So it should be easy for me to fix
bugs. :)

Having statistics for your running server is one of the best things you can do for
debugging and provisioning, so please consider installing the package
and setting it up.

Enjoy!


Adrian


Re: [squid-users] COSS causing squid Segment Violation on FreeBSD 6.2S (store_io_coss.c)

2009-01-17 Thread Adrian Chadd
2009/1/15 Mark Powell m.s.pow...@salford.ac.uk:

  Did you manage to get that FreeBSD 7 server working with COSS?

 Well did you :)

Yes. At least in testing. I don't (yet) have a client running
FreeBSD-7 and using COSS.

  This problem still exists in the latest squid. Any likelihood of a fix, or
 is COSS not recommended for FBSD?
  Thanks for your time.

FreeBSD-7 (and FreeBSD-current) + AUFS + COSS works fine in
Squid-2.HEAD at least in a polygraph polymix-4 workload. Admittedly
I've been testing it in Cacheboy-1.6 rather than Squid-2.HEAD, but the
COSS code should be the same as far as this bug is concerned.

The fact that you have many pending relocate errors may mean something
else is busted, but as far as I can tell you're the only person who
has reported that COSS problem in particular.

Not that COSS is a fantastically clean codebase to begin with; a lot
of hacking went into it to properly support async disk IO and thus
perform with any semblance of working well. It's possible there's a bug
which I just haven't seen in production.

2c,


Adrian


 adrian


 2008/9/12 Mark Powell m.s.pow...@salford.ac.uk:

 On Fri, 12 Sep 2008, Amos Jeffries wrote:

  Can you report a bug on this please, so we don't forget it, with a stack
  trace when the crash is occurring.

 Already did, last year:

 http://www.squid-cache.org/bugs/show_bug.cgi?id=1944

 Does this mean that COSS can't be successfully used with FreeBSD 7?
  Many thanks.

 --
 Mark Powell - UNIX System Administrator - The University of Salford
 Information Services Division, Clifford Whitworth Building,
 Salford University, Manchester, M5 4WT, UK.
 Tel: +44 161 295 6843  Fax: +44 161 295 5888  www.pgp.com for PGP key







 --
 Mark Powell - UNIX System Administrator - The University of Salford
 Information Services Division, Clifford Whitworth Building,
 Salford University, Manchester, M5 4WT, UK.
 Tel: +44 161 295 6843  Fax: +44 161 295 5888  www.pgp.com for PGP key




[squid-users] squidtools collection

2009-01-18 Thread Adrian Chadd
Hi everyone,

Just letting you all know that I'm (slowly) tidying up and uploading
the various squid related tools that I've written over the years (the
ones I can find / release :) into another googlecode project.

The url is: http://code.google.com/p/squidtools/

There's not much there at the moment. There's a simple redirector for
URL filtering/rewriting (thanks to a support contract client who
needed something stable to replace what he was using!) and my example
external_acl helper which implements filtering against the
phishtank.com blacklist.
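
For anyone wondering what such an external_acl helper looks like, here's a
minimal sketch of the same idea - the blacklist file format, paths and
squid.conf wiring below are made up for illustration, and the real phishtank
helper in squidtools is more involved:

#!/usr/bin/perl
# Minimal external_acl_type helper sketch: match URLs whose host appears in
# a local blacklist file (one hostname per line).  Hypothetical paths.
#
# squid.conf (sketch):
#   external_acl_type blacklist_lookup %URI /usr/local/bin/check_blacklist.pl
#   acl badsite external blacklist_lookup
#   http_access deny badsite
use strict;
use warnings;

$| = 1;                                  # helpers must never buffer replies

my $listfile = '/usr/local/etc/blacklist.txt';
my %bad;
open my $fh, '<', $listfile or die "open $listfile: $!";
while (<$fh>) {
    chomp;
    $bad{lc $_} = 1 if length;
}
close $fh;

while (my $line = <STDIN>) {
    chomp $line;
    my ($uri) = split ' ', $line;        # first token is the %URI format tag
    my ($host) = $uri =~ m{^[a-z]+://([^/:]+)}i;
    # "OK" means the ACL matches (so the http_access deny above kicks in).
    print(($host && $bad{lc $host}) ? "OK\n" : "ERR\n");
}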

If anyone has any interesting squid tools they'd like to include in
the squidtools code project then please let me know. I'd like to
eventually have the whole collection available as a single set which
can be packaged up and installed together to enhance existing and new
Squid (and cacheboy :) installations.

Thanks,


Adrian


[squid-users] squidtools rewriter and substistution support

2009-01-20 Thread Adrian Chadd
hi everyone,

Someone posted a request here a few days ago for how to convince
squirm to use parts of a regular expression match in a rewritten
URL. This isn't a new request, and I've done it a bunch of times in my
own rewriters, but I figured I should get around to doing it in a
generic way so I don't have to keep re-inventing the wheel.

So now my rewriter supports using matches in the rewritten URL. This
means a rule such as this

http://www.foo.com/(.*)$ http://bar.com/$1

.. will work.
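
For comparison, the same capture-group trick in a hand-rolled Perl redirector
(a minimal sketch, using only the example pattern above):

#!/usr/bin/perl
# Minimal redirector sketch showing capture-group reuse, equivalent to the
# "http://www.foo.com/(.*)$ http://bar.com/$1" rule above.
use strict;
use warnings;
$| = 1;                                   # redirectors must not buffer output

while (<STDIN>) {
    my ($uri, $rest) = split ' ', $_, 2;  # first field is the URL
    $uri =~ s{^http://www\.foo\.com/(.*)$}{http://bar.com/$1};
    print "$uri\n";
}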

I'm currently in the process of rewriting some of my Youtube rules to
use my URL rewriter now instead of a custom bit of perl code.

Hopefully having this simple rewriter out there will tease a few of
you to start using it and sharing configuration file snippets, which
is a whole lot easier than trying to share rewriter code. :)

Have fun,


Adrian
(http://code.google.com/p/squidtools/)


Re: [squid-users] (help!) strange things about maximum_object_size_in_memory in squid.conf

2009-01-20 Thread Adrian Chadd
If it hasn't been swapped out to disk, the object has to stay in RAM
until the client(s) currently fetching from it have fetched enough for
part of the object (ie, the stuff at the beginning which has been sent
to clients) to be freed.



Adrian


2009/1/20 Taehwan Weon taehwan.w...@gmail.com:
 Hi,

 I am using squid 2.6.STABLE 21 on linux.
 my squid.conf has the following settings:
  maximum_object_size_in_memory 10 KB
  cache_mem 3072 MB
  maximum_object_size   2000 MB
  minimum_object_size  0

 After running squid for more than a month, I ran 'squidclient 
 mgr:vm_objects' to
 look at the transit/hot object size.

 Even if I SET the maximum in-memory object size to 10 KB,
 squid HAD the following objects!  (the diff of inmem_lo and inmem_hi is 299KB)


 KEY CD0154B911563741E3E69CDB2E2D6FF0
  GET http://images.test.com/test_data/61/99/319.jpg
  STORE_OK  IN_MEMORY SWAPOUT_NONE PING_DONE
  CACHABLE,DISPATCHED,VALIDATED
  LV:1232416172 LU:1232417107 LM:1231909989 EX:-1
  0 locks, 0 clients, 6 refs
  Swap Dir -1, File 0X
  inmem_lo: 0
  inmem_hi: 299187
  swapout: 0 bytes queued


 In Squid: The Definitive Guide, published by O'Reilly,
 maximum_object_size_in_memory is described as the diff of inmem_lo and inmem_hi.
 But the real implementation seems to behave strangely.

 Any help will be highly appreciated.

 Thanks  in advance.

 Tawan Won




Re: [squid-users] (help!) strange things about maximum_object_size_in_memory in squid.conf

2009-01-21 Thread Adrian Chadd
Then it may be a bug. :)



Adrian

2009/1/20 Tawan Won taehwan.w...@gmail.com:
 As you can see from the object dump in my previous mail, there is no client
 fetching the object.
 If an object had clients fetching it, the object dump should print out the
 client list information too.
 In addition, at dump time, squid had no client connections.




 -Original Message-
 From: adrian.ch...@gmail.com [mailto:adrian.ch...@gmail.com] On Behalf Of
 Adrian Chadd
 Sent: Wednesday, January 21, 2009 10:42 AM
 To: taehwan.w...@gmail.com
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] (help!) strange things about
 maximum_object_size_in_memory in squid.conf

 If it hasn't been swapped out to disk, the object has to stay in RAM
 until the client(s) currently fetching from it have fetched enough for
 part of the object (ie, the stuff at the beginning which has been sent
 to clients) to be freed.



 Adrian


 2009/1/20 Taehwan Weon taehwan.w...@gmail.com:
 Hi,

 I am using squid 2.6.STABLE 21 on linux.
 my squid.conf has the following settings:
  maximum_object_size_in_memory 10 KB
  cache_mem 3072 MB
  maximum_object_size   2000 MB
  minimum_object_size  0

 After running squid for more than a month, I ran 'squidclient 
 mgr:vm_objects' to
 look at the transit/hot object size.

 Even if I SET the maximum in-memory object size to 10 KB,
 squid HAD the following objects!  (the diff of inmem_lo and inmem_hi is
 299KB)


 KEY CD0154B911563741E3E69CDB2E2D6FF0
  GET http://images.test.com/test_data/61/99/319.jpg
  STORE_OK  IN_MEMORY SWAPOUT_NONE PING_DONE
  CACHABLE,DISPATCHED,VALIDATED
  LV:1232416172 LU:1232417107 LM:1231909989 EX:-1
  0 locks, 0 clients, 6 refs
  Swap Dir -1, File 0X
  inmem_lo: 0
  inmem_hi: 299187
  swapout: 0 bytes queued


 In Squid: The Definitive Guide, published by O'Reilly,
 maximum_object_size_in_memory is described as the diff of inmem_lo and inmem_hi.
 But the real implementation seems to behave strangely.

 Any help will be highly appreciated.

 Thanks  in advance.

 Tawan Won






Re: [squid-users] Frequent cache rebuilding

2009-01-22 Thread Adrian Chadd
2009/1/21 Amos Jeffries squ...@treenet.co.nz:

 Yes it can. Squid's passing through of large objects is much more
 efficient than its pass-thru of small objects. A few dozen clients
 simultaneously grabbing movies or streaming through a CONNECT request can
 saturate a multi-GB link network buffer easily.
 A dual-core Xeon _should_ be able to saturate a 10GB link with all clients
 online.

Has anyone tried this?

The last time I tried multi-gige with Squid, it didn't really hit
anywhere near 10GE with streaming data because current kernels are
optimised with the idea that people will write concurrent software,
and so will run multiple threads to do the socket and network stuff
(copyin/copyout/tcp/ip stuff, with a kernel thread handling part of
the NIC stuff and potentially some of the TX/RX.)




Adrian


Re: [squid-users] cache_mem

2009-01-22 Thread Adrian Chadd
2009/1/22 Amos Jeffries squ...@treenet.co.nz:

 How intensive is intensive? At the moment squid is averaging a mere 2.4%
 processor time.

 IIRC older Squid-2 had to step through a linked list the length of the object in 4KB
 chunks to perform one of the basic operations (network write I think).

Yeah - the memory cache in Squid-2 was really only initially designed
as a sort of data pipeline between the server, the store, and the
client-side. It sort of grew the stuff needed to be a memory cache
by virtue, IIRC, of wanting to support one incoming stream feeding multiple
client retrievals without having to always go via the disk store for
it.

Unless you need the extra boost it gives you in very specific circumstances:

* use low cache_mem; but if you notice that you're hitting the disk often;
* use a larger cache_mem; but keep maximum_object_size_in_memory down
to around 64k

Squid-3 sort of fixed this. It wasn't ever fully fixed, much like
how the problem could be fixed in Squid-2 if someone wanted to do the
slight trickery required.



Adrian


[squid-users] Resigning from squid-core

2009-01-31 Thread Adrian Chadd
Hi all,

It's been a tough decision, but I'm resigning from any further active
role in the Squid core group and cutting back on contributing towards
Squid development.

I'd like to wish the rest of the active developers all the best in the
future, and thank everyone here for helping me develop and test my
performance and feature related Squid work.



Adrian


Re: [squid-users] Scalability in serving large ammount of concurrent requests

2009-05-02 Thread Adrian Chadd
it means they didn't bother investigating the problem and reporting
back to squid-users/squid-dev.

They may find that Squid-2.7 (and my squid-2 fork) perform a ton
better than whatever version they tried.

I'm trying to continue benchmarking my local Squid-2 fork against
simulations of lots of concurrent sessions, but the main problem is
finding free/open tools to simulate internet traffic levels.
Polygraph just can't simulate that many concurrent requests at a
decent enough traffic rate without significant equipment investment. I
have this nasty feeling I'm going to have to invent my own..

2c,


Adrian

2009/5/2 Roy M. setesting...@gmail.com:
 In http://highscalability.com/youtube-architecture , under Serving
 Thumbnails, it said:

 .
 - Used squid (reverse proxy) in front of Apache. This worked for a
 while, but as load increased performance eventually decreased. Went
 from 300 requests/second to 20.
 .

 So does it mean squid is not suitable for serving large amounts of
 concurrent requests (as compared to apache)?


 Thanks.




[squid-users] /dev/poll solaris 10 fixes

2009-05-03 Thread Adrian Chadd
I'm giving my /dev/poll (Solaris 10) code a good thrashing on some
updated Sun hardware. I've fixed one silly bug of mine in 2.7 and
2.HEAD.

If you're running Solaris 10 and not using the /dev/poll code then
please try out the current CVS version(s) or wait for tomorrow's
snapshots.

I'll commit whatever other fixes are needed in this environment here :)

Thanks,


Adrian


Re: [squid-users] WCCP return method

2009-05-05 Thread Adrian Chadd
Squid doesn't currently implement any smarts for the WCCPv2 return path.



Adrian

2009/5/6 kgardenia42 kgardeni...@googlemail.com:
 On Fri, May 1, 2009 at 5:28 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 kgardenia42 wrote:

 On 4/30/09, Ritter, Nicholas nicholas.rit...@americantv.com wrote:

 * WCCP supports a return method for packets which the web-cache
 decides to reject/return.  Does squid support this?  I see that the
 return method can be configured in squid but is the support for
 returning actually there?

 I dunno about this one.

 Does anyone know the answer to this?  I'd just like to know what squid
 can do when it comes to return method.

 Only whats documented.

 http://www.squid-cache.org/Doc/config/wccp2_return_method/

 In what circumstances would squid decide to trigger the return
 mechanism currently?  I was looking at the source and I couldn't see
 where this might be implemented.

 One of the reasons I ask is that since I'm using iptables to forward
 things that came to me via WCCP to the local squid port, I was
 wondering if it was feasible to take squid out of the loop just by
 changing my iptables rules to reject packets forwarded by WCCP, but I
 don't know enough about WCCP return methods to know if it is possible to
 use the return method mechanism to return such packets back to the
 router.

 Can anyone who is knowledgeable about this please help?

 Thanks,




Re: [squid-users] Split caching by size

2009-05-20 Thread Adrian Chadd
It's a per-cache_dir option in Squid-2.7 and above; I'm not sure about 3.



Adrian

2009/5/20 Jason Spegal jspe...@comcast.net:
 Just tested and verified this. At least in Squid 3.0 minimum_object_size
 affects both memory and disk caches. Anyone know if this is true in 3.1 as
 well? Any thoughts as to how to split it? I may be wrong, and likely am, but I
 recall there was a separate minimum_object_size for each cache at one time.

 Chris Robertson wrote:

 Jason Spegal wrote:

 How do I configure squid to only cache small objects, say less than 4mb
 in memory cache,

 http://www.squid-cache.org/Doc/config/maximum_object_size_in_memory/

 and only objects larger than 4mb to the disk?

 http://www.squid-cache.org/Doc/config/minimum_object_size/

 I want to optimize the cache based on object size. The reasoning is the
 small stuff will change often and be accessed the most while the larger
 items that tie up bandwidth will not change as often and I can cache more
 aggressively. Also this way I minimize disk io and lag. I am using squid
 3.0. While I can see this being done with the disk cache I am not certain
 the memory cache can be configured like this anymore as the options seem to
 be missing.

 Thanks,
  Jason

 Chris




Re: [squid-users] Internal redirector

2009-06-26 Thread Adrian Chadd
Squid-2.HEAD has some internal rewriting support.

I'm breaking it out into a separate module in Lusca (rather than being
an optional part of the external rewriter) to make using it in
conjunction with the external URL rewriter possible.



Adrian

2009/6/26 Jeff Pang pa...@laposte.net:
 Does squid support internal redirects officially?
 If not, using an external redirector is simple enough.

 #!/usr/bin/perl -wl

 $|=1;   # don't buffer the output

 while (<>) {

        our ($uri,$client,$ident,$method) = split;
        $uri =~ s/\&begin=[0-9]*//;

 } continue {
        print $uri;
 }

 2009/6/26 Chudy Fernandez chudy_fernan...@yahoo.com:

 can we use the internal redirector (rewrite feature) to replace/remove some
 regex (\&begin=[0-9]*) on a URL?

 like..
 http://www.foo.com/video.flv&begin=900
 to
 http://www.foo.com/video.flv







 --
 In this magical land, everywhere
 is in full bloom with flowers of evil.
                     - Jeff Pang (CN)




Re: [squid-users] Squid/PDF

2009-06-26 Thread Adrian Chadd
2009/6/26 Phibee Network Operation Center n...@phibee.net:
 OK, so the bug is not resolved, no?

The bugs get resolved when someone contributes a fix.. :)



Adrian


Re: [squid-users] Architecture

2009-06-26 Thread Adrian Chadd
2009/6/27 Chris Robertson crobert...@gci.net:

 I'm running a strictly forward proxy setup, which puts an entirely different
 load on the system.  It's also a pretty low load (peaks of 160 req/sec at
 25mbit/sec).

Just another random datapoint - I've just deployed my Squid-2
derivative (which is at least as fast as Squid-2.HEAD) as a forward
proxy on some current generation hardware. It's peaking at 700
requests/sec and ~120mbit a sec with a ~ 30% byte hit rate.

A reverse proxy with a high hit rate should do quite a bit better than that.


Adrian


Re: [squid-users] Updated CentOS/Squid/Tproxy Transparency steps.

2009-06-27 Thread Adrian Chadd
Good writeup!

I'm rapidly coming to the conclusion that the problem with
transparency setups is not just a lack of documentation and examples,
but a lack of clear explanation and understanding of what is actually
going on.

I had one user try to manually configure GRE interfaces on the Cisco
side because that is how they thought WCCP worked. Another policy
routed TCP to the proxy and didn't quite get why some connections
were hanging (ICMP doesn't make it to the proxy, so PMTU is
guaranteed to break without blackhole detection in one or more
participants' end-nodes/proxy.) Combine that with all of the crazy IOS
related bugs and crackery that is going on and I'm not really
surprised the average joe doesn't have much luck. :)

I reckon what would be really, really useful is a writeup of all of
the related technologies involved in all parts of transparent
interception, including a writeup on what WCCPv2 actually is and how
it works; what the various interception options are and do (especially
TPROXY4, which AFAICT is severely lacking in -actual- documentation
about what it is, how it works and how to code for it) so there is at
least a small chance that someone with a bit of clue can easily figure
all the pieces out and debug stuff.

I also see people doing TPROXY4/Linux hackery involving -bridging-
proxies instead of routed/WCCPv2 proxies. That is another fun one.

Finally, figuring out how to tie all of that junk into a cache
hierarchy is also hilariously amusing to get right.

Just for the record, the kernel and iptables binary shipping with the
latest Debian unstable supports TPROXY4 fine. I didn't have to
recompile my kernel or anything - I just had to tweak a few things
(disable pmtu, for example) and add some iptables rules. Oh, and
compile Squid right.

2c,


Adrian


Re: [squid-users] Cache youtube videos WITHOUT videocache?

2009-06-27 Thread Adrian Chadd
2009/7/20 Mark Lodge mlodg...@gmail.com:
 I've come across this at
 http://wiki.squid-cache.org/Features/StoreUrlRewrite

 Feature: Store URL Rewriting?

 Does this mean I can cache videos without using videocache?

That was the intention. Unfortunately, people didn't really pick up on
the power of the feature and have stuck to abusing the redirector API
to serve this kind of content.

The advantage of the redirector approach is that it can bypass all of
the cache rule checking which goes on inside Squid. A lot of these
video sites (and CDN content sites in general - they charge for
content served! :) make content caching quite difficult if not
impossible. The store URL rewriting scheme also requires a set of
refresh patterns to override the "don't cache me, please!" headers
added to content.

I'd love to see a community take on board the store URL rewriter
interface and maintain rulesets for caching youtube, maps, windows
updates, etc. It just doesn't seem like it'll happen.
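
To illustrate what I mean, the whole thing is only a few config lines plus
a trivial helper. Something like this (an untested sketch - the ACL
domains, the refresh_pattern and the helper path are placeholders that a
community would maintain per-site):

  # squid.conf (2.7)
  storeurl_rewrite_program /usr/local/bin/store_rewrite.pl
  storeurl_rewrite_children 5
  acl store_rewrite_list dstdomain .example-cdn.com
  storeurl_access allow store_rewrite_list
  storeurl_access deny all
  # plus refresh_patterns to override the "don't cache me" headers
  refresh_pattern -i \.flv$ 10080 90% 999999 ignore-no-cache override-expire ignore-private

  #!/usr/bin/perl
  # store_rewrite.pl - canonicalise URLs so that variants of the same
  # object map onto a single cache key (illustrative only)
  $| = 1;
  while (<>) {
      chomp;
      my ($uri) = split;              # squid sends "URL ip/fqdn ident method"
      $uri =~ s/[?&]begin=[0-9]+//;   # e.g. strip a varying offset parameter
      print "$uri\n";
  }

The hard part isn't the code; it's maintaining the rulesets as the sites
change their URL layouts.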



Adrian


Re: [squid-users] Architecture

2009-06-29 Thread Adrian Chadd
2009/6/30 Ronan Lucio lis...@tiper.com.br:

 Could you tell me what hardware you use?
 Reading the Squid Guide
 (http://www.deckle.co.za/squid-users-guide/Installing_Squid), it says Squid
 isn't CPU intensive, and that a multiprocessor machine would not increase speed
 dramatically.


It's a dual quad-core AMD of some sort. Squid is CPU intensive but
currently only uses 1 CPU for the main application. You'll get
benefits from having multi-core machines, but only for offloading
network and disk processing onto them.

 I know this doc is quite old, but it talks about machines like a Pentium 133
 with 128 MB RAM.

 So initially I was thinking of a dual quad-core + 4GB RAM. Now I'm thinking of
 a single quad-core + 2GB.

Another Squid rule - as much RAM as possible.

 What do you think about that?

 I think a throughput like yours would be great for me.

 Another question: How many disks do you use?
 In other words: Do I need some special disk strategy to achieve such a
 throughput?

Like anything, your best bet is to test and document the performance.
In this case, it's lots of disks on a sensible RAID controller, but no
RAID. I wasn't given time to benchmark RAID vs non-RAID, but for this
particular workload RAID has never been faster in my testing, except
in cases where the RAID card itself was buggy. Others have a
differing opinion.


Adrian


Re: [squid-users] squid becomes very slow during peak hours

2009-06-30 Thread Adrian Chadd
Upgrade to a later Squid version!



adrian

2009/6/30 goody goody think...@yahoo.com:

 Hi there,

 I am running Squid 2.5 on FreeBSD 7, and my Squid box responds very slowly
 during peak hours. My Squid machine has twin dual-core processors, 4 GB RAM and
 the following HDDs.

 Filesystem     Size    Used   Avail Capacity  Mounted on
 /dev/da0s1a    9.7G    241M    8.7G     3%    /
 devfs          1.0K    1.0K      0B   100%    /dev
 /dev/da0s1f     73G     35G     32G    52%    /cache1
 /dev/da0s1g     73G    2.0G     65G     3%    /cache2
 /dev/da0s1e     39G    2.5G     33G     7%    /usr
 /dev/da0s1d     58G    6.4G     47G    12%    /var


 Below are the status output and the settings I have applied. I need further
 guidance to improve the box.

 last pid: 50046;  load averages:  1.02,  1.07,  1.02    up 7+20:35:29  15:21:42
 26 processes:  2 running, 24 sleeping
 CPU states: 25.4% user,  0.0% nice,  1.3% system,  0.8% interrupt, 72.5% idle
 Mem: 378M Active, 1327M Inact, 192M Wired, 98M Cache, 112M Buf, 3708K Free
 Swap: 4096M Total, 20K Used, 4096M Free

  PID USERNAME      THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
 49819 sbt    1 105    0   360M   351M CPU3   3  92:43 98.14% squid
  487 root            1  96    0  4372K  2052K select 0  57:00  3.47% natd
  646 root            1  96    0 16032K 12192K select 3  54:28  0.00% snmpd
 49821 sbt    1  -4    0  3652K  1048K msgrcv 0   0:13  0.00% diskd
 49822 sbt    1  -4    0  3652K  1048K msgrcv 0   0:10  0.00% diskd
 49864 root            1  96    0  3488K  1536K CPU2   1   0:04  0.00% top
  562 root            1  96    0  3156K  1008K select 0   0:04  0.00% syslogd
  717 root            1   8    0  3184K  1048K nanslp 0   0:02  0.00% cron
 49631 x-man           1  96    0  8384K  2792K select 0   0:01  0.00% sshd
 49635 root            1  20    0  5476K  2360K pause  0   0:00  0.00% csh
 49628 root            1   4    0  8384K  2776K sbwait 1   0:00  0.00% sshd
  710 root            1  96    0  5616K  2172K select 1   0:00  0.00% sshd
 49634 x-man           1   8    0  3592K  1300K wait   1   0:00  0.00% su
 49820 sbt    1  -8    0  1352K   496K piperd 3   0:00  0.00% unlinkd
 49633 x-man           1   8    0  3456K  1280K wait   3   0:00  0.00% sh
  765 root            1   5    0  3156K   872K ttyin  1   0:00  0.00% getty
  766 root            1   5    0  3156K   872K ttyin  2   0:00  0.00% getty
  767 root            1   5    0  3156K   872K ttyin  2   0:00  0.00% getty
  769 root            1   5    0  3156K   872K ttyin  3   0:00  0.00% getty
  771 root            1   5    0  3156K   872K ttyin  1   0:00  0.00% getty
  770 root            1   5    0  3156K   872K ttyin  0   0:00  0.00% getty
  768 root            1   5    0  3156K   872K ttyin  3   0:00  0.00% getty
  772 root            1   5    0  3156K   872K ttyin  1   0:00  0.00% getty
 47303 root            1   8    0  8080K  3560K wait   1   0:00  0.00% squid
  426 root            1  96    0  1888K   420K select 0   0:00  0.00% devd
  146 root            1  20    0  1356K   668K pause  0   0:00  0.00% adjkerntz


 pxy# iostat
      tty             da0            pass0             cpu
  tin tout  KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
   0  126 12.79   5  0.06   0.00   0  0.00   4  0  1  0 95

 pxy# vmstat
  procs      memory      page                    disks     faults      cpu
  r b w     avm    fre   flt  re  pi  po    fr  sr da0 pa0   in   sy   cs us sy id
  1 3 0  458044 103268    12   0   0   0    30   5   0   0  273 1721 2553  4  1 95

 pxy# netstat -am
 1376/1414/2790 mbufs in use (current/cache/total)
 1214/1372/2586/25600 mbuf clusters in use (current/cache/total/max)
 1214/577 mbuf+clusters out of packet secondary zone in use (current/cache)
  147/715/862/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
 0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
 0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
 3360K/5957K/9317K bytes allocated to network (current/cache/total)
 0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
 0/0/0 requests for jumbo clusters denied (4k/9k/16k)
 0/7/6656 sfbufs in use (current/peak/max)
 0 requests for sfbufs denied
 0 requests for sfbufs delayed
 0 requests for I/O initiated by sendfile
 0 calls to protocol drain routines


  the netstat -an | grep TIME_WAIT | more  command shows 17 screenfuls of output.

 some lines from squid.conf
 cache_mem 256 MB
 cache_replacement_policy heap LFUDA
 memory_replacement_policy heap GDSF

 cache_swap_low 80
 cache_swap_high 90

 cache_dir diskd /cache2 6 16 256 Q1=72 Q2=64
 cache_dir diskd /cache1 6 16 256 Q1=72 Q2=64

 cache_log /var/log/squid25/cache.log
 cache_access_log /var/log/squid25/access.log
 cache_store_log none

 half_closed_clients off
 maximum_object_size 1024 KB

 pxy# sysctl -a | grep maxproc
 kern.maxproc: 6164
 kern.maxprocperuid: 5547
 kern.ipc.somaxconn: 1024
 kern.maxfiles: 

Re: [squid-users] Updated CentOS/Squid/Tproxy Transparency steps.

2009-07-01 Thread Adrian Chadd
This won't work. You're only redirecting half of the traffic flow with
the wccp web-cache service group. The tproxy code is probably
correctly trying to originate packets -from- the client IP address to
the upstream server but because you're only redirecting half of the
packets (ie, packets from original client to upstream, and not also
the packets from the upstream to the client - and this is the flow
that needs to be hijacked!) things will hang.

You need to read the TPROXY2 examples and look at the Cisco/Squid WCCP
setup. There are two service groups configured - 80 and 90 - which
redirect the client -> server and server -> client flows respectively.
They have the right bits set in the service group definitions to
redirect the traffic correctly.
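
The shape of it, pulled from memory of those pages (so treat this as a
sketch - interface names, router address and hash flags need checking
against your own setup):

  # squid.conf
  wccp2_router 192.168.20.1
  wccp2_forwarding_method gre
  wccp2_return_method gre
  wccp2_service dynamic 80
  wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 ports=80
  wccp2_service dynamic 90
  wccp2_service_info 90 protocol=tcp flags=dst_ip_hash,ports_source priority=240 ports=80

  ! Cisco side: each direction goes into its own service group
  ip wccp 80
  ip wccp 90
  interface FastEthernet0/1
   ! client-facing: client -> server packets
   ip wccp 80 redirect in
  interface Serial0/0
   ! WAN-facing: server -> client packets
   ip wccp 90 redirect in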

The WCCPv2/TPROXY4 pages are hilariously unclear. I ended up having to
find the TPROXY2 pages to extract the right WCCPv2 setup to use,
then combine that with the TPROXY4 rules. That is fine for me (I know
a thing or two about this) but it should all be made much, much
clearer for people trying to set this up.

As I suggested earlier, you may wish to consider fleshing out an
interception section in the Wiki complete with explanations about how
all of the various parts of the puzzle hold together.

2c,


adrian

2009/7/2 Alexandre DeAraujo al...@cal.net:
 I am giving this one more try, but have been unsuccessful. Any help is always 
 greatly appreciated.

 Here is the setup:
 Router:
 Cisco 7200 IOS 12.4(25)
 ip wccp web-cache redirect-list 11
 access-list 11 permits only selected IP addresses to use WCCP

 Wan interface (Serial)
 ip wccp web-cache redirect out

 Global WCCP information:
 Router information:
 Router Identifier:                      192.168.20.1
 Protocol Version:                       2.0

 Service Identifier: web-cache
 Number of Service Group Clients:        1
 Number of Service Group Routers:        1
 Total Packets s/w Redirected:   8797
 Process:                                4723
 Fast:                                   0
 CEF:                                    4074
 Redirect access-list:                   11
 Total Packets Denied Redirect:  124925546
 Total Packets Unassigned:               924514
 Group access-list:                      -none-
 Total Messages Denied to Group: 0
 Total Authentication failures:          0
 Total Bypassed Packets Received:        0

 WCCP Client information:
 WCCP Client ID: 192.168.20.2
 Protocol Version:       2.0
 State:                  Usable
 Initial Hash Info:      
                        
 Assigned Hash Info:     
                        
 Hash Allotment: 256 (100.00%)
 Packets s/w Redirected: 306
 Connect Time:           00:21:33
 Bypassed Packets
 Process:                0
 Fast:                   0
 CEF:                    0
 Errors:                 0

 Clients are on FEthernet0/1
 Squid server is the only device on FEthernet0/3
 
 Squid Server:
 eth0      Link encap:Ethernet  HWaddr 00:14:22:21:A1:7D
          inet addr:192.168.20.2  Bcast:192.168.20.7  Mask:255.255.255.248
          inet6 addr: fe80::214:22ff:fe21:a17d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3325 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2606 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:335149 (327.2 KiB)  TX bytes:394943 (385.6 KiB)

 gre0      Link encap:UNSPEC  HWaddr 
 00-00-00-00-CB-BF-F4-FF-00-00-00-00-00-00-00-00
          inet addr:192.168.20.2  Mask:255.255.255.248
          UP RUNNING NOARP  MTU:1476  Metric:1
          RX packets:400 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:31760 (31.0 KiB)  TX bytes:0 (0.0 b)
 
 /etc/rc.d/rc.local file:
 ip rule add fwmark 1 lookup 100
 ip route add local 0.0.0.0/0 dev lo table 100
 modprobe ip_gre
 ifconfig gre0 192.168.20.2 netmask 255.255.255.248 up
 echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind
 
 /etc/sysconfig/iptables file:
 # Generated by iptables-save v1.4.4 on Wed Jul  1 03:32:55 2009
 *mangle
 :PREROUTING ACCEPT [166:11172]
 :INPUT ACCEPT [164:8718]
 :FORWARD ACCEPT [0:0]
 :OUTPUT ACCEPT [130:12272]
 :POSTROUTING ACCEPT [130:12272]
 :DIVERT - [0:0]
 -A DIVERT -j MARK --set-xmark 0x1/0x
 -A DIVERT -j ACCEPT
 -A PREROUTING -p tcp -m socket -j DIVERT
 -A PREROUTING -p tcp -m tcp --dport 80 -j TPROXY --on-port 3128 --on-ip 
 192.168.20.2 --tproxy-mark 0x1/0x1
 COMMIT
 # Completed on Wed Jul  1 03:32:55 2009
 # Generated by iptables-save v1.4.4 on Wed Jul  1 03:32:55 2009
 *filter
 :INPUT ACCEPT [0:0]
 :FORWARD ACCEPT 

Re: [squid-users] How to do a limit quota download on a Squid proxy

2009-07-10 Thread Adrian Chadd
I had specified how to implement proper quota support for a client -
but the project unfortunately fell through.

It's easy to hook into the end of an HTTP request and record how much
bandwidth was used. The missing piece is a way of granting users
network access such that they can't easily blow through hundreds of
megabytes in a single download. I had outlined another helper process
to grant download quota in configurable chunks - e.g. per megabyte.
The other missing piece was being able to clear all connections from a
given IP or for a given user.

All of these are easy to do if someone has some motivation. :)
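
To give a feel for the accounting half, a toy external ACL helper plus a
couple of config lines is enough for the "deny when over quota" check.
This is purely illustrative - the /var/run/quota.db file and whatever
maintains it from the access log are the missing pieces described above:

  # squid.conf
  external_acl_type quota children=5 ttl=10 %SRC /usr/local/bin/check_quota.pl
  acl overquota external quota
  http_access deny overquota

  #!/usr/bin/perl
  # check_quota.pl - answer OK when the client IP is over its daily quota
  use strict;
  $| = 1;
  my $limit = 100 * 1024 * 1024;                    # e.g. 100MB/day
  while (my $ip = <STDIN>) {
      chomp $ip;
      my %used;
      if (open(my $db, '<', '/var/run/quota.db')) { # "ip bytes" per line
          while (<$db>) {
              my ($addr, $bytes) = split;
              $used{$addr} = $bytes;
          }
          close $db;
      }
      print(($used{$ip} || 0) >= $limit ? "OK\n" : "ERR\n");
  }

The other pieces - metering access out in chunks and kicking existing
connections - are the parts that need actual Squid work.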


Adrian

2009/7/9 tintin_vefg54e654g maf1...@hotmail.fr:

 Hi everyone,

 my configuration is as follow :

 I have a Mandriva 2009.1 OS, with squid ( + sarg, and mrtg) proxy.
 so, in order to keep some bandwidth free for work use ^^ I would
 like to set download limits.

 I don't have user authentication; I identify my users by IP address.
 The thing is, I would like to set a quota as a limit of 100MB of download
 per day per IP address.
 How is it possible to do such a thing?
 Is it? ^^
 If I dare ask... in the most simple way.

 ok, thanks for your help,
 see ya' on the forum

 Tintin


 --
 View this message in context: 
 http://www.nabble.com/How-to-do-a-limit-quota-download-on-a-Squid-proxy-tp24410453p24410453.html
 Sent from the Squid - Users mailing list archive at Nabble.com.




Re: [squid-users] CentOS/Squid/Tproxy but no transfer

2009-07-13 Thread Adrian Chadd
2009/7/14 Amos Jeffries squ...@treenet.co.nz:

 Aha!  duplicate syn-ack is exactly the case I got a good trace of earlier.
 Turned out to be missing config on the cisco box.

Do you have an example of this particular (mis) configuration? The
note in the Wiki article isn't very clear.

 The Features/Tproxy4 wiki page now makes explicit mention of this and
 several possible workarounds.

 The problem seems to be that the WCCP automatic bypass for return traffic
 uses IP, which is not usable under TPROXY. Some other method of traffic
 detection and bypass must be explicitly added for traffic
 Squid->Cisco->Internet. In the old tproxy v2 configs (which still apply)
 the class 90 was used for this.

.. uhm, again, that isn't very clear. The automatic bypass isn't
explicitly configured anywhere, nor do I see anything in the tproxy2
config which mentions a bypass with class 90. So I'm very curious what
exactly it is that people are seeing, and with what exact
configuration(s).


Adrian


Re: [squid-users] CentOS/Squid/Tproxy but no transfer

2009-07-13 Thread Adrian Chadd
2009/7/14 Amos Jeffries squ...@treenet.co.nz:

 Do you have an example of this particular (mis) configuration? The
 note in the Wiki article isn't very clear.

 I don't. The admin only mentioned that adding a bypass on the service group
 fixed the issue.
 I had a tcpdump of a set of requests showing pairs of seemingly identical
 requests arriving from the router within 1 sec of each other. On deep
 inspection the slightly delayed one showed, compared to the first, some minor
 alterations of the kind Squid makes.

Right. But what was the squid config, cisco config and network
topology for both the "doesn't work" and the "works" setups?

 If there is any way to make the wiki clearer without wholesale inclusion of
 per-IOS config settings, go for it.

Well, it may boil down to per-IOS and per-platform config. The
problem is getting some more information to at least document what is
needed.

 The behavior I saw was:

  enable wccpv2 + NAT intercept with wiki config
   == perfectly working, not a sign of any squid-sourced packets.

Right, probably because it was using one service group and the
half-duplex redirection needed for normal, non-tproxy interception was
being done.

  swap NAT for tproxy4 with the wiki config (no change to WCCP or links)
   == loop trace showing squid outward packets coming IN from WCCP.

Yeah that won't work. :)

 So I say it "seems" and "appears" to be an automatic bypass in WCCP or the
 router somewhere. No idea where. It may need bypassing manually to fix tproxy.

Well, the automatic bypass should be: if the router sees packets from
an IP address or MAC of a registered device, it should pass them
through. I have no idea whether it does this without explicit
"don't redirect this further" rules (e.g. deny entries in the redirect
list, or "wccp exclude in", etc) because that may absolutely be
platform, IOS and WCCPv2 negotiation type dependent.
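
For reference, the two usual explicit ways of saying that on the Cisco
side look roughly like this (a sketch only - the interface name and the
proxy address are placeholders for wherever the proxy actually sits):

  ! stop re-redirecting traffic arriving from the proxy's interface
  interface FastEthernet0/3
   ip wccp redirect exclude in

  ! or: exclude the proxy itself in the redirect list
  access-list 11 deny host 192.168.20.2
  access-list 11 permit any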

So please, poke the admin in question to get as much information about
the configuration and setup of everything.



Adrian


Re: [squid-users] https from different Subnet not working

2009-07-14 Thread Adrian Chadd
2009/7/14 Jarosch, Ralph ralph.jaro...@justiz.niedersachsen.de:
 This is the latest supported squid-2 version for RHEL5.3

 And I want to use the dnsserver

Right. Well, besides the other posters' response about the cache peer
setup being a clue - you're choosing a peer based on source IP as far
as I can tell there - which leads me to think that perhaps that
particular cache has a problem. You didn't say which caches they were
in your config or error message so we can't check whether they're the
same or different.

But since you're using a supported squid for RHEL5.3, why don't you
contact Red Hat for support? That is what you're paying them for.


adrian


Re: [squid-users] https from different Subnet not working

2009-07-14 Thread Adrian Chadd
Are you using a url rewriter program?

Also, why haven't you just emailed redhat support?



Adrian

2009/7/15 Jarosch, Ralph ralph.jaro...@justiz.niedersachsen.de:
 I found the section which rewrite the request in my cache.log.

 Can someone explain what happens there.

 2009/07/15 06:51:56| cbdataValid: 0x17f684f8
 2009/07/15 06:51:56| redirectHandleRead: {http:/golem.de 10.39.119.9/- - 
 CONNECT}
 2009/07/15 06:51:56| cbdataValid: 0x1808d4a8
 2009/07/15 06:51:56| cbdataUnlock: 0x1808d4a8
 2009/07/15 06:51:56| clientRedirectDone: 'erv-justiz.niedersachsen.de:443' 
 result=http:/golem.de
 2009/07/15 06:51:56| init-ing hdr: 0x1808f160 owner: 1
 2009/07/15 06:51:56| appending hdr: 0x1808f160 += 0x1808ec00
 2009/07/15 06:51:56| created entry 0x17f726f0: 'User-Agent: Mozilla/4.0 
 (compatible; MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; 
 .NET CLR 3.0.04506.30)'
 2009/07/15 06:51:56| 0x1808f160 adding entry: 50 at 0
 2009/07/15 06:51:56| created entry 0x17fcc870: 'Proxy-Connection: Keep-Alive'
 2009/07/15 06:51:56| 0x1808f160 adding entry: 41 at 1
 2009/07/15 06:51:56| created entry 0x17fc3190: 'Content-Length: 0'
 2009/07/15 06:51:56| 0x1808f160 adding entry: 14 at 2
 2009/07/15 06:51:56| created entry 0x17fcbd80: 'Host: 
 erv-justiz.niedersachsen.de'
 2009/07/15 06:51:56| 0x1808f160 adding entry: 27 at 3
 2009/07/15 06:51:56| created entry 0x17fcc990: 'Pragma: no-cache'
 2009/07/15 06:51:56| 0x1808f160 adding entry: 37 at 4
 2009/07/15 06:51:56| 0x1808f160 lookup for 37
 2009/07/15 06:51:56| 0x1808f160: joining for id 37
 2009/07/15 06:51:56| 0x1808f160: joined for id 37: no-cache
 2009/07/15 06:51:56| 0x1808f160 lookup for 7
 2009/07/15 06:51:56| 0x1808f160 lookup for 7
 2009/07/15 06:51:56| 0x1808f160 lookup for 40
 2009/07/15 06:51:56| 0x1808f160 lookup for 52
 2009/07/15 06:51:56| clientInterpretRequestHeaders: REQ_NOCACHE = SET
 2009/07/15 06:51:56| clientInterpretRequestHeaders: REQ_CACHABLE = NOT SET
 2009/07/15 06:51:56| clientInterpretRequestHeaders: REQ_HIERARCHICAL = NOT SET
 2009/07/15 06:51:56| clientProcessRequest: CONNECT 
 'http.justiz.niedersachsen.de:443'
 2009/07/15 06:51:56| aclCheckFast: list: (nil)
 2009/07/15 06:51:56| aclCheckFast: no matches, returning: 1
 2009/07/15 06:51:56| sslStart: 'CONNECT http.justiz.niedersachsen.de:443'
 2009/07/15 06:51:56| comm_open: FD 58 is a new socket
 2009/07/15 06:51:56| fd_open FD 58 http.justiz.niedersachsen.de:443
 2009/07/15 06:51:56| comm_add_close_handler: FD 58, handler=0x463e31, 
 data=0x1808d378

 -Ursprüngliche Nachricht-
 Von: Jarosch, Ralph [mailto:ralph.jaro...@justiz.niedersachsen.de]
 Gesendet: Dienstag, 14. Juli 2009 11:40
 An: squid-users@squid-cache.org
 Betreff: AW: [squid-users] https from different Subnet not working

  -Ursprüngliche Nachricht-
  Von: adrian.ch...@gmail.com [mailto:adrian.ch...@gmail.com] Im
 Auftrag
  von Adrian Chadd
  Gesendet: Dienstag, 14. Juli 2009 11:16
  An: Jarosch, Ralph
  Cc: squid-users@squid-cache.org
  Betreff: Re: [squid-users] https from different Subnet not working
 
  2009/7/14 Jarosch, Ralph ralph.jaro...@justiz.niedersachsen.de:
   This is the latest support squid-2 version for RHEL5.3
  
   An I want to use the dnsserver
 
  Right. Well, besides the other posters' response about the cache peer
  setup being a clue - you're choosing a peer based on source IP as far
  as I can tell there - which leads me to think that perhaps that
  particular cache has a problem. You didn't say which caches they were
  in your config or error message so we can't check whether they're the
  same or different.
 
 Ok, sorry.
 The current path for a website request is

 Client --> headproxy (10.37.132.2) --> my cache proxies
 (10.37.132.5/6/7/8) --> proxy of our ISP --> internet

 The error message comes from the ISP proxy when I request
 something like https://www.ebay.com

  The requested URL could not be retrieved
  --
  -- While trying to retrieve the URL: http.yyy.xxx:443
       (the yyy.xxx is our local domain)
  the following error was encountered:
  Unable to determine IP address from host name for The dnsserver
  returned:
  Name Error: The domain name does not exist.
  This means that:
   The cache was not able to resolve the hostname presented in the URL.
   Check if the address is correct.
  Your cache administrator is webmaster.
  --
  -- Generated Tue, 14 Jul 2009 08:10:39 GMT by xxx
       (the answer comes from the ISP)
  (squid/2.5.STABLE12)

 I've made a tcpdump between our headproxy and our cache proxies and
 there I can see that the headproxy changes the request from
 https://www.ebay.com to https.our.domain.com



  But since yo'ure using a supported squid for RHEL5.3, why don't you
  contact Redhat for support? That is why you're paying them for.
 
 
  adrian




Re: [squid-users] Architecture for scaling delivery of large static files

2009-07-15 Thread Adrian Chadd
2009/7/16 Jamie Tufnell die...@googlemail.com:

 We are talking files up to 1GB in size here.  Taking that into
 consideration, would you still recommend this architecture?

On disk? Sure. The disk buffer cache helps quite a bit.

In memory? (as in, the squid hot object cache, not the buffer cache)
Not without investing some time into the Squid-2 fixes to do it.
I've toyed with it before and it's reasonably easy to fix without
hurting performance.

2c,


Adrian


Re: [squid-users] Architecture for scaling delivery of large static files

2009-07-16 Thread Adrian Chadd
I was going to say: I'm tweaking the performance of a cache with 21
million objects in it now. That's a bit bigger than 2^24.

2009/7/16 Henrik Nordstrom hen...@henriknordstrom.net:
 tor 2009-07-16 klockan 14:29 +1200 skrev Amos Jeffries:

 For you with MB-GB files in Squid-2 that changes to faster Squid due to
 limiting RAM-cache to small files, with lots of large fast disks. Squid is
 limited to a few million (2^24) cache _objects_

 per cache_dir, and up to 32 (2^6) cache_dir.

 Regards
 Henrik




Re: [squid-users] rep_mime_type is evaluated before content has been reached ?

2009-07-21 Thread Adrian Chadd
2009/7/21 Soporte Técnico @lemNet sopo...@nodoalem.com.ar:
 rep_mime_type can´t be used for parent selection because this is evaluated
 before content has been reached ?

Correct.



Adrian


Re: AW: AW: AW: AW: AW: [squid-users] Squid 3.1.0.11 beta is available

2009-07-21 Thread Adrian Chadd
Just break on SIGABRT and SIGSEGV. The actual place in the code where
things failed will be slightly further up the callstack than the break
point but it -will- be triggered.

Just remember to ignore SIGPIPEs or you'll have a strangely failing Squid. :)
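
In gdb terms, something like this (a sketch; -NCXd9 as in the earlier mails):

  (gdb) handle SIGPIPE nostop noprint pass
  (gdb) handle SIGABRT stop
  (gdb) handle SIGSEGV stop
  (gdb) run -NCXd9
  ... wait for the assertion to fire ...
  (gdb) bt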



adrian

2009/7/21 Marcus Kool marcus.k...@urlfilterdb.com:
 my 2 cents:
 someone needs to explain how to set a breakpoint
 because when the assertion fails, the program exits
 (see previous emails: Program exited with code 01)
 The question is where to set the breakpoint
 but probably Amos knows where to set it.

 Marcus


 Silamael wrote:

 Zeller, Jan wrote:

 Hi Amos,

 I now explicitly enabled
 --enable-stacktraces Enable automatic call backtrace on fatal errors

 during the build and added CFLAGS=-g -ggdb in front of ./configure but
 the result seems to be the same...

 # ./squid -v
 Squid Cache: Version 3.1.0.11
 configure options:  '--prefix=/opt/squid-3.1.0.11' '--enable-icap-client'
 '--enable-ssl' '--enable-linux-netfilter' '--disable-ipv6'
 '--disable-translation' '--disable-auto-locale' '--with-pthreads'
 '--with-filedescriptors=32768' '--enable-stacktraces' 'CFLAGS=-g -ggdb'
 --with-squid=/usr/local/src/squid-3.1.0.11 --enable-ltdl-convenience
 2009/07/21 15:43:50| assertion failed: mem.cc:236: size ==
 StrPoolsAttrs[i].obj_size
 Aborted

 # gdb --args ./squid -NCXd9
 GNU gdb 6.8-debian
 Copyright (C) 2008 Free Software Foundation, Inc.
 License GPLv3+: GNU GPL version 3 or later
 http://gnu.org/licenses/gpl.html
 This is free software: you are free to change and redistribute it.
 There is NO WARRANTY, to the extent permitted by law.  Type show
 copying
 and show warranty for details.
 This GDB was configured as x86_64-linux-gnu...
 (gdb) bt
 No stack.
 (gdb) quit


 You forgot to tell gdb to run the program.
 # gdb --args ./squid -NCXd9
 start gdb and tell it to use -NCXd9 as arguments for squid
 When you get the gdb prompt, enter:
 (gdb) r
 which will run squid. When it crashes you type
 (gdb) bt
 to get the backtrace. If squid does not crash, typing bt is pretty
 useless. Same, if it even didn't run before ;)

 - -- Matthias






Re: [squid-users] Re: TCp_HIT problem

2009-07-25 Thread Adrian Chadd
2009/7/25 Amos Jeffries squ...@treenet.co.nz:

 ?? looks like your problem. Most of the web traffic you will ever see is
 under 2 MB big.
 Average size is somewhere between 32KB and 128KB depending on your clients.

Weird; my largest proxy customer with around 15,000 users or so now
behind one proxy has a different traffic distribution. 99% of requests
are under 64k, but over half the traffic is for objects above 8
megabytes. I've told Lusca to cache objects up to around 1900mbytes in
size and I so far have seen hits for objects up to a gigabyte.

(I'll publish actual stats when the client gives me the green light.)

2c,


Adrian


Re: [squid-users] Caching Pandora

2009-07-25 Thread Adrian Chadd
2009/7/26 Jason Spegal jspe...@comcast.net:
 I was able to cache Pandora by compiling with --enable-http-violations and
 using a refresh_pattern to cache everything regardless. This however broke
 everything by preventing proper refreshing of any site. If it could be
 worked where violations only happened as directly specified in the
 configuration it would be a workable solution. I did some testing and I
 could not confirm that it was anything in the configuration file itself that
 was causing the issue. I wouldn't recommend using this as such.

Perhaps you could email them and ask why they've made their content uncachable?

Having cachable video content on websites will make them much, much
less likely to begin being blocked by bandwidth-strapped end-sites. :)



Adrian


Re: [squid-users] Caching Pandora

2009-07-25 Thread Adrian Chadd
This doesn't surprise me. They may be trying to maximise outbound
bits, or trying to retain control over content, or they may not
understand caching, or some combination of the above.

I'd suggest contacting them and asking.




adrian

2009/7/26 Jason Spegal jspe...@comcast.net:
 A little bit messy but here are some snippets.

 ###Access.log

 1248572380.275    178 10.10.122.248 TCP_REFRESH_UNMODIFIED/304 232 GET
 http://images-sjl-1.pandora.com/images/public/amz/1/2/0/4/727361124021_500W_495H.jpg
 - DIRECT/208.85.40.13 -
 1248572409.144   8472 10.10.122.241 TCP_MISS/200 1581181 GET
 http://audio-sjl-t3-2.pandora.com/access/7008639604707703825.mp4? -
 DIRECT/208.85.41.38 application/octet-stream
 1248572439.512     94 10.10.122.241 TCP_MEM_HIT/200 55396 GET
 http://images-sjl-2.pandora.com/images/public/amz/3/0/2/3/602498413203_500W_499H.jpg
 - NONE/- image/jpeg
 1248572570.898    300 10.10.122.248 TCP_MISS/200 6521 GET
 http://images-sjl-3.pandora.com/images/public/amz/2/2/4/4/039841434422_130W_130H.jpg
 - DIRECT/208.85.41.23 image/jpeg
 1248572600.538  29937 10.10.122.248 TCP_MISS/200 7704188 GET
 http://audio-sjl-t3-2.pandora.com/access/3642267922875646389.mp3? -
 DIRECT/208.85.41.38 application/octet-stream
 1248572615.735  11507 10.10.122.241 TCP_MISS/200 2109481 GET
 http://audio-sjl-t2-2.pandora.com/access/5722981497105294607.mp4? -
 DIRECT/208.85.41.36 application/octet-stream
 1248572635.903    179 10.10.122.248 TCP_REFRESH_UNMODIFIED/304 232 GET
 http://images-sjl-3.pandora.com/images/public/amz/2/2/4/4/039841434422_130W_130H.jpg
 - DIRECT/208.85.41.23 -
 1248572641.444     40 10.10.122.241 TCP_HIT/200 21616 GET
 http://images-sjl-2.pandora.com/images/public/amz/8/7/6/1/602498611678_300W_273H.jpg
 - NONE/- image/jpeg

 ###Store.log

 1248572380.275 RELEASE -1  097EAE1108DCEF192ED1C3BFF1F6C1B5  304
 1248572380        -1        -1 unknown -1/0 GET
 http://images-sjl-1.pandora.com/images/public/amz/1/2/0/4/727361124021_500W_495H.jpg
 1248572409.144 RELEASE -1  6B93B1BF958703B3FC3CD1ADDD515695  200
 1248572400        -1 1248572400 application/octet-stream 1580815/1580815 GET
 http://audio-sjl-t3-2.pandora.com/access/7008639604707703825.mp4?
 1248572570.897 SWAPOUT 00 0004CF23 BEEE111A39B596B14903743011AF2C36  200
 1248572570 1248490006        -1 image/jpeg 6181/6181 GET
 http://images-sjl-3.pandora.com/images/public/amz/2/2/4/4/039841434422_130W_130H.jpg
 1248572600.538 RELEASE -1  070416ED935AD18DCA793569D2C6A652  200
 1248572570        -1 1248572570 application/octet-stream 7703822/7703822 GET
 http://audio-sjl-t3-2.pandora.com/access/3642267922875646389.mp3?
 1248572615.735 RELEASE -1  B0EB42B39131DF028BA3BE9A39CC24E4  200
 1248572604        -1 1248572604 application/octet-stream 2109115/2109115 GET
 http://audio-sjl-t2-2.pandora.com/access/5722981497105294607.mp4?
 1248572635.903 RELEASE -1  CDCA0D3510080D121E5578310976676E  304
 1248572635        -1        -1 unknown -1/0 GET
 http://images-sjl-3.pandora.com/images/public/amz/2/2/4/4/039841434422_130W_130H.jpg
 1248572886.822 RELEASE -1  A95C86074129546301911C2FC251071D  200
 1248572872        -1 1248572872 application/octet-stream 2086824/2086824 GET
 http://audio-sjl-t1-1.pandora.com/access/5188159311574708305.mp4?

 ###Wireshark

 Hypertext Transfer Protocol
 HTTP/1.0 200 OK\r\n
 Date: Sun, 26 Jul 2009 05:12:58 GMT\r\n
 Server: Apache\r\n
 Content-Length: 6137729\r\n
 Cache-Control: no-cache, no-store, must-revalidate, max-age=-1\r\n
 Pragma: no-cache, no-store\r\n
 Expires: -1\r\n
 Content-Type: application/octet-stream\r\n
 X-Cache: MISS from ichiban\r\n
 X-Cache-Lookup: MISS from ichiban:3128\r\n
 Via: 1.0 ichiban (squid)\r\n
 Proxy-Connection: keep-alive\r\n
 \r\n

 Amos Jeffries wrote:

 Jason Spegal wrote:

 I was able to cache Pandora by compiling with --enable-http-violations
 and using a refresh_pattern to cache everything regardless. This however
 broke everything by preventing proper refreshing of any site. If it could be
 worked where violations only happened as directly specified in the
 configuration it would be a workable solution. I did some testing and I
 could not confirm that it was anything in the configuration file itself that
 was causing the issue. I wouldn't recommend using this as such.


 Which indicates that there is fine tuning possible to cache just Pandora.
 Find yourself one of the Pandora URLs in your access.log and take a visit to
 www.redbot.org or the ircache.org cacheability engine.


 Amos




 Henrik Nordstrom wrote:

 lör 2009-07-25 klockan 12:05 -0600 skrev Brett Glass:


 One of the largest consumers of our HTTP bandwidth is Pandora, the free
 music service. Unfortunately, Pandora marks its streams as non-cacheable 
 and
 also puts question marks in the URLs, which is a huge waste of bandwidth.
 How can this be overridden?


 The question mark can be ignored. See the cache directive. But if there
 are other parameters behind there (normally not logged) that just may 

Re: [squid-users] High CPU utilization

2009-07-27 Thread Adrian Chadd
Change ufs to aufs - assuming you compiled in aufs.
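
i.e. in your squid.conf, keeping your existing path and sizes, something like:

  cache_dir aufs /var/spool/squid 5000 50 256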

Consider upgrading to Squid-2.7.STABLEx - I did a whole lot of little
performance tweaks between 2.6 and 2.7.

Learn about oprofile and submit some performance information to help
developers. :)



Adrian

2009/7/28 jotacekm minu...@viaip.com.br:

 Hello.
 Recently we have added a lot more clients behind a squid proxy, and now CPU
 utilization is usually 70-95%. The processor is an Intel dual-core 2160 @
 1.80GHz. Users started complaining about the speed of accessing pages, and the
 link is fine.

 Here is squidclient mgr:info:

 Squid Object Cache: Version 2.6.STABLE5
 Start Time:     Fri, 24 Jul 2009 20:19:16 GMT
 Current Time:   Mon, 27 Jul 2009 19:09:07 GMT
 Connection information for squid:
        Number of clients accessing cache:      1561
        Number of HTTP requests received:       8590404
        Number of ICP messages received:        0
        Number of ICP messages sent:    0
        Number of queued ICP replies:   0
        Number of HTCP messages received:       0
        Number of HTCP messages sent:   0
        Request failure ratio:   0.00
        Average HTTP requests per minute since start:   2021.3
        Average ICP messages per minute since start:    0.0
        Select loop called: 122560206 times, 2.081 ms avg
 Cache information for squid:
        Request Hit Ratios:     5min: 23.0%, 60min: 21.9%
        Byte Hit Ratios:        5min: 13.3%, 60min: 14.7%
        Request Memory Hit Ratios:      5min: 16.3%, 60min: 17.0%
        Request Disk Hit Ratios:        5min: 26.3%, 60min: 29.9%
        Storage Swap size:      4609784 KB
        Storage Mem size:       65692 KB
        Mean Object Size:       15.67 KB
        Requests given to unlinkd:      455490
 Median Service Times (seconds)  5 min    60 min:
        HTTP Requests (All):   1.24267  1.46131
        Cache Misses:          1.54242  1.81376
        Cache Hits:            0.28853  0.30459
        Near Hits:             1.31166  1.62803
        Not-Modified Replies:  0.23230  0.23230
        DNS Lookups:           0.29097  0.31806
        ICP Queries:           0.0  0.0
 Resource usage for squid:
        UP Time:        254991.358 seconds
        CPU Time:       56029.670 seconds
        CPU Usage:      21.97%
        CPU Usage, 5 minute avg:        94.38%
        CPU Usage, 60 minute avg:       93.36%
        Process Data Segment Size via sbrk(): 193008 KB
        Maximum Resident Size: 0 KB
        Page faults with physical i/o: 70
 Memory usage for squid via mallinfo():
        Total space in arena:  193008 KB
        Ordinary blocks:       179871 KB   6058 blks
        Small blocks:               0 KB      0 blks
        Holding blocks:          1080 KB      2 blks
        Free Small blocks:          0 KB
        Free Ordinary blocks:   13136 KB
        Total in use:          180951 KB 93%
        Total free:             13136 KB 7%
        Total size:            194088 KB
 Memory accounted for:
        Total accounted:       111416 KB
        memPoolAlloc calls: 973288506
        memPoolFree calls: 972077571
 File descriptor usage for squid:
        Maximum number of file descriptors:   4096
        Largest file desc currently in use:   1604
        Number of file desc currently in use: 1308
        Files queued for open:                   0
        Available number of file descriptors: 2788
        Reserved number of file descriptors:   100
        Store Disk files open:                  30
        IO loop method:                     epoll
 Internal Data Structures:
        300182 StoreEntries
         14733 StoreEntries with MemObjects
         14414 Hot Object Cache Items
        294106 on-disk objects

 And here is part of of squid.conf:

 http_port 3128
 visible_hostname xxx
 hierarchy_stoplist cgi-bin ?
 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY
 acl apache rep_header Server ^Apache
 broken_vary_encoding allow apache
 access_log /var/log/squid/access.log squid
 cache_store_log none
 hosts_file /etc/hosts


 #
 --
 cache_mem 64 MB
 cache_dir ufs /var/spool/squid 5000 50 256
 cache_replacement_policy heap LFUDA
 maximum_object_size 51200 KB
 maximum_object_size_in_memory 64 KB
 memory_replacement_policy heap GDSF
 logfile_rotate 3


 #
 --
 refresh_pattern ^ftp:           1440    20%     10080
 refresh_pattern ^gopher:        1440    0%      1440
 refresh_pattern .               0       20%     4320
 acl all src 0.0.0.0/0.0.0.0
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8



 max_open_disk_fds 2046

 # timeouts
 connect_timeout 30 seconds
 shutdown_lifetime 5 seconds
 forward_timeout 2 minutes
 pconn_timeout 30 seconds
 persistent_request_timeout 1 minute
 request_timeout 2 

Re: [squid-users] Donate section not update

2009-08-01 Thread Adrian Chadd
The donations were always few and far between. I'm not sure if there
have been any real active donations in the last twelve months; I think
only Duane knows.


Adrian

2009/8/2 Juan C. Crespo R. jcre...@ifxnw.com.ve:
 Guys

    Checking the site I found there have been no donations since December 2008 -
 or is that an error on the page? Because no donations makes it look like no one
 cares about this project, and that can't be right because I see a lot of people
 complaining and asking for features and error resolutions; I include myself
 in this group.

 Regards.




Re: [squid-users] Does squid support multithreading ?

2009-08-02 Thread Adrian Chadd
2009/8/2 Sachin Malave sachinmal...@gmail.com:
 I have a multicore processor here and I want to run squid3 on this
 platform. Does squid support multithreading? Will it improve
 performance?

None of the public Squid codebases currently support general
multithreading. There's some threading for IO but that is it.

The only other support is some magic for sharing the same incoming
HTTP socket between multiple, separate squid processes.

If you care about performance, Squid-2.7 is probably the best for you
at the moment from the Squid codebases..


Adrian


Re: [squid-users] Squid high bandwidth IO issue (ramdisk SSD)

2009-08-02 Thread Adrian Chadd
2009/8/2 smaugadi a...@binat.net.il:

 Dear Adrian,
 During the implementation we encountered issues with all kind of variables
 such as:
 Limit of file descriptors (now the squid is using 204800).
 TCP port range was low (increased to 1024 65535) TCP timers (changed them)
 The ip_conntrack and hash size were low (now 524288 262144 respectively)

 Now we are at a point that IO is the only issue.

What profiling have you done to support that? For example, one of the
issues I had which looked like an IO performance problem was actually
because the controller was completely unhappy. Upgrading the firmware
on the controller card significantly increased performance.

But I think you need to post some further information about the
problem. IO can be rooted in a lot of issues. :)


Adrian


Re: [squid-users] Squid high bandwidth IO issue (ramdisk SSD)

2009-08-02 Thread Adrian Chadd
Are you seeing high IO wait CPU use, or high IO wait times on IO?



Adrian

2009/8/2 smaugadi a...@binat.net.il:

 Dear Adrian,
 Well, my conclusion that this is an IO problem came from the fact that I see
 huge IO waits as the volume of traffic increases (with tools such as mpstat);
 when using a ramdisk there is no such issue.
 I have configured the SSD drive with ext2, no journal, noatime. Used the
 “noop” I/O scheduler.
 In /etc/fstab
 /dev/sdb1               /cache                  ext2 defaults,noatime 1 2

 hdparm results:
 hdparm -t /dev/sdb1

 /dev/sdb1:
  Timing buffered disk reads:  304 MB in  3.01 seconds = 100.93 MB/sec
 
 hdparm -T /dev/sdb1

 /dev/sdb1:
  Timing cached reads:   4192 MB in  2.00 seconds = 2096.58 MB/sec

 Any ideas?

 Regards.



 Adrian Chadd-3 wrote:

 2009/8/2 smaugadi a...@binat.net.il:

 Dear Adrian,
 During the implementation we encountered issues with all kind of
 variables
 such as:
 Limit of file descriptors (now the squid is using 204800).
 TCP port range was low (increased to 1024 65535) TCP timers (changed
 them)
 The ip_conntrack and hash size were low (now 524288 262144 respectively)

 Now we are at a point that IO is the only issue.

 What profiling have you done to support that? For example, one of the
 issues I had which looked like IO performance was actually because the
 controller was completely unhappy. Upgrading the firmware on the
 controller card signficantly increased performance.

 But I think you need to post some further information about the
 problem. IO can be rooted in a lot of issues. :)


 Adrian



 --
 View this message in context: 
 http://www.nabble.com/Squid-high-bandwidth-IO-issue-%28ramdisk-SSD%29-tp24775448p24776193.html
 Sent from the Squid - Users mailing list archive at Nabble.com.




Re: [squid-users] Squid high bandwidth IO issue (ramdisk SSD)

2009-08-02 Thread Adrian Chadd
Well, from what I've read, SSDs don't necessarily provide very high
random write throughput over time. You should do some further research
into how they operate to understand what the issues may be.

In any case, the much more important information is what IO pattern(s)
are occurring on your storage media and what the controller is doing
with them. You still haven't eliminated the possibility that the
controller/driver is somehow not helping.

You should also graph at least read/write IO count and byte counts;
investigate what is going on.

2c,



Adrian

2009/8/2 smaugadi a...@binat.net.il:

 Dear Waitman,


 Testing the SSD drive, before installing it on the squid, showed huge
 performance advantage in IOPS, read/write. So, I thought that this will
 solve the problems I had with HDD.
 But it was not so, look at this output:
 12:39:35 PM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal
 %idle    intr/s
 12:39:37 PM  all    2.87    0.00    2.25   44.44    0.12    3.50    0.00
 46.82  11666.50
 12:39:37 PM    0    0.00    0.00    0.00    0.00    0.50    4.50    0.00
 95.00   4764.00
 12:39:37 PM    1    0.00    0.00    0.50    4.98    0.00    2.49    0.00
 92.04   2097.50
 12:39:37 PM    2   11.56    0.00    8.54   76.88    0.00    3.02    0.00
 0.00   1977.50
 12:39:37 PM    3    0.50    0.00    0.00   95.52    0.50    3.48    0.00
 0.00   2827.50

 This is a moment before the system went down, the IO is up high.


 Waitman Gobble-2 wrote:


 smaugadi wrote:
 Dear ALL,
 We have a squid server with high volume of traffic, 200 – 300 MB


Re: [squid-users] Squid high bandwidth IO issue (ramdisk SSD)

2009-08-02 Thread Adrian Chadd
Generally large amounts of CPU being spent in IO wait means that the
driver is not well-written or the hardware requires extra upkeep to
handle IO operations.

What hardware in particular are you using?

This was one of those big differences between IDE and SATA in the past
btw. At least under Linux in the distant past, a lot of the IDE
drivers would have to manually transfer the data using PIO rather than
having a bus-master DMA transfer occur like many SCSI cards did. This
was counted as IO wait.

Investigate what your storage driver is doing. :)

HTH,


Adrian

2009/8/2 smaugadi a...@binat.net.il:

 Well I'm seeing that the CPU is taking a lot of time waiting for outstanding
 disk I/O request.
 Adi

 Adrian Chadd-3 wrote:

 Are you seeing high IO wait CPU use, or high IO wait times on IO?



 Adrian

 2009/8/2 smaugadi a...@binat.net.il:

 Dear Adrian,
 Well my conclusion that this is an IO problem came from the fact that I
 see
 huge IO waits as the volume of traffic increase (with tools such as
 mpstat),
 when using ramdisk there is no such issue.
 I have configured the SSD drive with ext2, no journal, noatime. Used the
 “noop” I/O scheduler.
 In /etc/fstab
 /dev/sdb1               /cache                  ext2 defaults,noatime 1 2

 hdparm results:
 hdparm -t /dev/sdb1

 /dev/sdb1:
  Timing buffered disk reads:  304 MB in  3.01 seconds = 100.93 MB/sec
 
 hdparm -T /dev/sdb1

 /dev/sdb1:
  Timing cached reads:   4192 MB in  2.00 seconds = 2096.58 MB/sec

 Any ideas?

 Regards.



 Adrian Chadd-3 wrote:

 2009/8/2 smaugadi a...@binat.net.il:

 Dear Adrian,
 During the implementation we encountered issues with all kind of
 variables
 such as:
 Limit of file descriptors (now the squid is using 204800).
 TCP port range was low (increased to 1024 65535) TCP timers (changed
 them)
 The ip_conntrack and hash size were low (now 524288 262144
 respectively)

 Now we are at a point that IO is the only issue.

 What profiling have you done to support that? For example, one of the
 issues I had which looked like IO performance was actually because the
 controller was completely unhappy. Upgrading the firmware on the
 controller card signficantly increased performance.

 But I think you need to post some further information about the
 problem. IO can be rooted in a lot of issues. :)


 Adrian



 --
 View this message in context:
 http://www.nabble.com/Squid-high-bandwidth-IO-issue-%28ramdisk-SSD%29-tp24775448p24776193.html
 Sent from the Squid - Users mailing list archive at Nabble.com.





 --
 View this message in context: 
 http://www.nabble.com/Squid-high-bandwidth-IO-issue-%28ramdisk-SSD%29-tp24775448p24776478.html
 Sent from the Squid - Users mailing list archive at Nabble.com.




Re: [squid-users] Re: Squid high bandwidth IO issue (ramdisk SSD)

2009-08-02 Thread Adrian Chadd
2009/8/2 Heinz Diehl h...@fancy-poultry.org:

 1. Change cache_dir in squid from ufs to aufs.

That is almost always a good idea for any decent performance under any
sort of concurrent load. I'd like proof otherwise - if one finds it,
it indicates something which should be fixed.

 2. Format /dev/sdb1 with mkfs.xfs -f -l lazy-count=1,version=2 -i attr=2 -d 
 agcount=4
 3. Mount it afterwards using rw,noatime,logbsize=256k,logbufs=2,nobarrier 
 in fstab.

 4. Use cfq as the standard scheduler with the linux kernel

Just out of curiosity, why these settings? Do you have any research
which shows this?

 (Btw: on my systems, squid-2.7 is noticeably _a lot_ slower than squid-3,
 if the object is not in cache...)

This is an interesting statement. I can't think of any specific reason
why squid-2.7 should perform worse than Squid-3 in this instance. This
is the kind of "works by magic" stuff which deserves investigation so
the issue(s) can be fully understood. Otherwise you may find that a
regression creeps in in later Squid-3 versions because all of the
issues weren't fully understood and documented, and some coder makes a
change which they think won't have as much of an effect as it does. It
has certainly happened before in squid. :)

So, more information please.



Adrian


Re: [squid-users] Way to hide Caching Server IP

2009-08-03 Thread Adrian Chadd
Investigate tproxy



Adrian

2009/8/4 Ja-Ryeong Koo wjb...@gmail.com:
 Hello,

 I am writing this email to ask something regarding ways to hide Caching
 Server IP address.

 I have one apache server, one caching server (squid2.6.stable22).
 (Client --> Caching Server (Reverse Proxy) --> Apache Server)

 Now, whenever I connect to the apache server, both the caching server IP and
 the client IP (my PC's IP address) are seen on the Apache server.

 I would like the apache server to see only the client IP address.

 Please let me know if you have any kinds of ways to do this.

 In advance, thank you for your kind consideration.

 Best Regards,
 Ja-Ryeong Koo

 --
 Ja-Ryeong Koo,
 Department of Computer Science,
 Texas A&M University-College Station,
 TX, 77843-3112, USA,
 Phone: +1-979-204-8021



Re: [squid-users] Re: [new] videocache question

2009-08-04 Thread Adrian Chadd
Is this still involving the videocache stuff?

If it is, why aren't you asking them?



Adrian

2009/8/4 ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ mirz...@gmail.com:
 (repost)
 and how about caching online game patchers? e.g. Ragnarok Online, Rohan
 Online, etc.?
 do they use the same method?

 and can anyone give me an example of this streaming caching?
 because:
 r...@server:/home/mirza# tail -f /var/log/squid/store.log
 1249364647.522 RELEASE -1  94E0DA2780D918D4AC808CBCF54144CC
 200 1249364644        -1 1249364644 text/html -1/952 GET
 http://openx.detik.com/delivery/afr.php?n=a7157323zoneid=349cb=INSERT_RANDOM_NUMBER_HERE
 1249364647.745 RELEASE -1  7F3A99532B8A6CEA7D846D4F0AE2E6AE
 200 1249364647        -1 1249364647 image/gif 43/43 GET
 http://openx.detik.com/delivery/lg.php?bannerid=2167campaignid=1043zoneid=349loc=http%3A%2F%2Fwww.detiknews.com%2Fread%2F2009%2F08%2F04%2F123104%2F1177031%2F10%2Fjenazah-mbah-surip-siap-dimandikan-ibu-ibu-bacakan-surat-yasincb=1b1801337b

 always RELEASE

 On Mon, Aug 3, 2009 at 12:44 AM, ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ
 ▓▒░mirz...@gmail.com wrote:
 ok amos

 anyone ? have any idea bout this prob ?

 On Sun, Aug 2, 2009 at 7:59 PM, Amos Jeffriessqu...@treenet.co.nz wrote:
 ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:

 On Sun, Aug 2, 2009 at 7:55 AM, Amos Jeffriessqu...@treenet.co.nz wrote:
 .

 ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:

 im using 2.x ( latest )

 anyone can help ?

 Are we to assume by latest 2.x you mean 2.7 or merely the latest
 available
 in your unknown operating system?
 Hint: 'latest 2.x' for RedHat and several others is 2.5. Obsolete many
 years
 ago.

 If you did mean 2.7, then the example there and in related Discussion
 page
 is the best you are going to get right now. They even provide a useful
 helper script to do the URL mapping.


 Amos
 --

 i use latest from ubuntu
 yes it is 2.7

 and how about caching online game patcher ? e.g ragnarok online, rohan
 online, etc ?
 is that  use same method ?

 I can't say. I have not seen those game patchers in operation.
 The steam game patcher shows some promise though for certain of its
 operations.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE17
  Current Beta Squid 3.1.0.12




 --
 -=-=-=-=
 Personal Blog http://my.blog.or.id (still learning)
 Hot News !!! : Want your own PREMIUM SMS service? Contact me ASAP.
 Get MAXIMUM revenue share with no traffic requirements...




 --
 -=-=-=-=
 Personal Blog http://my.blog.or.id (still learning)
 Hot News !!! : Want your own PREMIUM SMS service? Contact me ASAP.
 Get MAXIMUM revenue share with no traffic requirements...




Re: Re: [squid-users] Squid high bandwidth IO issue (ramdisk SSD)

2009-08-04 Thread Adrian Chadd
How much disk IO is going on when the CPU shows 70% IOWAIT? Far too
much. The CPU time spent in CPU IOWAIT shouldn't be that high. I think
you really should consider trying an alternative disk controller.




adrian

2009/8/4 smaugadi a...@binat.net.il:

 Dear Adrian and Heinz,
 Sorry for the delayed reply and thanks for all the help so far.
 I have tried changing the file system (ext2 and ext3), changed the
 partitioning geometry (fdisk -H 224 -S 56) as I read that this would improve
 performance with SSD.
 I tried ufs, aufs and even coss (downgrade to 2.6). (By the way the average
 object size is 13KB).
 And failed!

 From system monitoring during the squid degradation I saw:

 /usr/local/bin/iostat -dk -x 1 1000 sdb
 Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sdb               0.00     0.00    0.00    4.00     0.00    72.00    36.00
 155.13 25209.75 250.25 100.10

 Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sdb               0.00     0.00    0.00    4.00     0.00    16.00     8.00
 151.50 26265.50 250.50 100.20

 Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sdb               0.00     0.00    0.00    3.00     0.00    12.00     8.00
 147.49 27211.33 333.33 100.00

 Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sdb               0.00     0.00    0.00    4.00     0.00    32.00    16.00
 144.54 28311.25 250.25 100.10

 Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sdb               0.00     0.00    0.00    4.00     0.00   100.00    50.00
 140.93 29410.25 250.25 100.10

 Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sdb               0.00     0.00    0.00    4.00     0.00    36.00    18.00
 137.00 30411.25 250.25 100.10

 Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sdb               0.00     0.00    0.00    2.00     0.00     8.00     8.00
 133.29 31252.50 500.50 100.10

 As soon as the service time increases above 200MS problems start, also the
 total time for service (time in queue + service time) goes all the way to 32
 sec.

 This is from mpstat at the same time:

 09:33:56 AM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal
 %idle    intr/s
 09:33:58 AM  all    3.00    0.00    2.25   84.02    0.12    2.75    0.00
 7.87   9782.00
 09:33:58 AM    0    3.98    0.00    2.99   72.64    0.00    3.98    0.00
 16.42   3971.00
 09:33:58 AM    1    2.01    0.00    1.01   80.40    0.00    1.51    0.00
 15.08   1542.00
 09:33:58 AM    2    2.51    0.00    2.01   92.96    0.00    2.51    0.00
 0.00   1763.50
 09:33:58 AM    3    3.02    0.00    3.02   90.95    0.00    3.02    0.00
 0.00   2506.00

 09:33:58 AM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal
 %idle    intr/s
 09:34:00 AM  all    0.50    0.00    0.25   74.12    0.00    0.62    0.00
 24.50   3833.50
 09:34:00 AM    0    0.50    0.00    0.50    0.00    0.00    1.00    0.00
 98.00   2015.00
 09:34:00 AM    1    0.50    0.00    0.00   98.51    0.00    1.00    0.00
 0.00    544.50
 09:34:00 AM    2    0.50    0.00    0.00   99.50    0.00    0.00    0.00
 0.00    507.00
 09:34:00 AM    3    0.50    0.00    0.00   99.00    0.00    0.50    0.00
 0.00    766.50

 09:34:00 AM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal
 %idle    intr/s
 09:34:02 AM  all    0.12    0.00    0.25   74.53    0.00    0.12    0.00
 24.97   1751.50
 09:34:02 AM    0    0.00    0.00    0.00    0.00    0.00    0.00    0.00
 100.00   1155.50
 09:34:02 AM    1    0.00    0.00    0.50   99.50    0.00    0.00    0.00
 0.00    230.50
 09:34:02 AM    2    0.00    0.00    0.00  100.00    0.00    0.00    0.00
 0.00    220.00
 09:34:02 AM    3    0.00    0.00    0.50   99.50    0.00    0.00    0.00
 0.00    146.00

 09:34:02 AM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal
 %idle    intr/s
 09:34:04 AM  all    1.25    0.00    1.50   74.97    0.00    0.00    0.00
 22.28   1607.50
 09:34:04 AM    0    5.47    0.00    5.47    0.00    0.00    0.00    0.00
 89.05   1126.00
 09:34:04 AM    1    0.00    0.00    0.00  100.00    0.00    0.00    0.00
 0.00    158.50
 09:34:04 AM    2    0.00    0.00    0.50   98.51    0.50    0.50    0.00
 0.00    175.50
 09:34:04 AM    3    0.00    0.00    0.00  100.00    0.00    0.00    0.00
 0.00    147.00

 Well, sometimes you eat the bear and sometimes the bear eats you.

 Do you have any more ideas?
 Regards,
 Adi.




 Adrian Chadd-3 wrote:

 2009/8/2 Heinz Diehl h...@fancy-poultry.org:

 1. Change cache_dir in squid from ufs to aufs.

 That is almost always a good idea for any decent performance under any
 sort of concurrent load. I'd like proof otherwise - if one finds

Re: [squid-users] New Accel Reverse Proxy Cache is not caching everything... how to force?

2009-08-04 Thread Adrian Chadd
2009/8/4 Hery Setiawan yellowha...@gmail.com:

 maybe in his mind (and my mind too, actually), with a big mem_cache the
 files will be transferred faster. But that big is too much for me,
 since I only have 4GB of RAM and a thousand workstations connecting
 to my squid.

The squid memory cache doesn't work the way people seem to think it
does. Once objects leave the memory cache pool they're out for good.

The rule of thumb is quite simple - keep cache_mem large enough to
handle in-transit objects and a few hot objects; leave the rest to be
available for general operating system disk buffer caching. Only
deviate from this if you absolutely, positively require it.

This will occur when your workload has a lot of small objects which
you frequently hit. Hack up or download something to generate a
request size vs {hit rate, byte hit rate, service time, cumulative
traffic} to see exactly how many tiny/small objects you're getting a
hit off of.
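
As a rough sketch of that kind of breakdown, assuming Squid's native
access.log format (result code in field 4, reply size in field 5) and a log
at /var/log/squid/access.log; adjust both to your own setup:

  awk '{
    sz = $5; code = $4;
    b = (sz < 1024) ? "<1KB" : (sz < 16384) ? "1-16KB" : (sz < 262144) ? "16-256KB" : ">256KB";
    total[b]++;
    if (code ~ /HIT/) hits[b]++;
  } END {
    for (b in total)
      printf "%-9s %8d requests  %5.1f%% hit\n", b, total[b], 100 * hits[b] / total[b];
  }' /var/log/squid/access.log

That is enough to tell whether the hits you care about are small enough to be
worth keeping in cache_mem at all.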

If you have a very small set of constantly hot traffic that will fit
in memory, up cache_mem. But be aware of the performance repercussions
if the hot traffic leaves cache_mem and stays on disk.. :)

If you have a set of hot traffic that moves over time, upping
cache_mem may not help.

2c,



Adrian


Re: [squid-users] proxy: explicit transparent + VideoCache

2009-08-06 Thread Adrian Chadd
Have you asked the videocache group why it functions the way it functions?



adrian

2009/8/6 pavel kolodin pavelkolo...@gmail.com:
 On Thu, 06 Aug 2009 05:34:09 -, Amos Jeffries squ...@treenet.co.nz
 wrote:


 Why?

 Possible reasons:

 1) 302 being the status you really want to use for this.

 2) transparent proxy, aka intercepting proxy, aka man-in-the-middle attack:
 perhaps the plugin is smart enough to detect such attacks and prevent them
 from working.

 3) perhaps the plugin is simply smart enough to realize it will never get
 a redirect back from the real source.

 4) perhaps the browser does not have access to the new location.

 5) perhaps the browser is limiting the sources the plugin may connect to;
 raw IP addresses are known to be dangerous.

 The browser (or the Flash plugin in the browser) doesn't even try to send the
 request to 10.10.10.1 if the proxy is transparent.




Re: [squid-users] Script Check

2009-08-09 Thread Adrian Chadd
Don't do that.

As someone who did this 10+ years ago, I suggest you do this instead:

* do some hackery to find out how your freeradius server stores the
currently logged in users. It may be in a mysql database, it may be
in a disk file, etc, etc
* have your redirector query -that- directly, rather than running
radwho. When I did this 10 years ago, the radius server kept a wtmp
style file with current logins which worked okish for a few dozen
users, then sucked for a few hundred users. I ended up replacing it
with a berkeley DB hash table to make searching for users faster.
* then in the helper, cache the IP results for a short period (say, 5
to 10 seconds) so frequent page accesses wouldn't result in a flurry
of requests to the backend
* keep the number of helpers low - you're doing it wrong if you need
more than 5 or 6 helpers doing this..
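
For illustration, a minimal sketch of such a helper, assuming the freeradius
session data lives in a MySQL radacct table and the box has GNU coreutils;
every path, host name and query below is a placeholder, not something Squid
or freeradius provides:

  #!/bin/sh
  # Sketch of a redirector that caches the logged-in IP list for a few seconds.
  CACHE=/var/run/radius-ips.txt
  MAXAGE=10

  refresh_cache() {
      # Replace this with however *your* freeradius stores active sessions.
      mysql -N -e 'SELECT framedipaddress FROM radacct WHERE acctstoptime IS NULL' radius \
          > "$CACHE.tmp" && mv "$CACHE.tmp" "$CACHE"
  }

  while read url client rest; do
      ip=${client%%/*}              # squid passes "ip/fqdn" as the second field
      age=$(( $(date +%s) - $(stat -c %Y "$CACHE" 2>/dev/null || echo 0) ))
      [ "$age" -gt "$MAXAGE" ] && refresh_cache
      if grep -qx "$ip" "$CACHE" 2>/dev/null; then
          echo "$url"               # echo the URL back unchanged: no rewrite
      else
          echo "302:http://login.example.com/"
      fi
  done

Wire it up with url_rewrite_program pointing at the script and a small
url_rewrite_children value, per the last point above.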



Adrian

2009/8/8  mic...@casa.co.cu:
 Hello

 Using squid 2.6 at my work, I have a group of users who connect by dial-up
 to a NAS, with a freeradius server to authenticate them. Each time they log
 in, my users are assigned a dynamic IP address, making it impossible to grant
 permissions by IP address without authentication.

 Right now levels of access to sites are assigned by authenticating against an
 Active Directory, but I want to change that.

 I want a script so that when squid gets a request from that block of IP
 addresses, it reads the username and IP address from the freeradius server
 (the radwho tool shows connected users plus their IP address, or MySQL can be
 used to get the same information),

 and compares it against a text file; if the user is listed, they get access
 without authentication of any kind.

 Is it possible to do this?

 Sorry for my English, it is very poor.

 Thanks

 Michel





 --
 Webmail, email service
 Casa de las Americas - La Habana, Cuba.




Re: [squid-users] Squid 3.1.0.13 Speed Test - Upload breaks?

2009-08-16 Thread Adrian Chadd
The pipelining used by speedtest.net and such won't really get a
benefit from the current squid pipelining support.



Adrian

2009/8/15 Daniel sq...@zoomemail.com:
 Henrik,

        I added 'pipeline_prefetch on' to my squid.conf and it still isn't 
 working right. I've pasted my entire squid.conf below, if you have anything 
 extra turned on/off or et cetera than please let me know and I'll try it.  
 Thanks!

 acl manager proto cache_object
 acl localhost src 127.0.0.1/32
 acl to_localhost dst 127.0.0.0/8
 acl TestPoolIPs src lpt-hdq-dmtqq31 wksthdq88w
 acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
 acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
 acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
 acl sclthdq01w src 10.211.194.187/32    # custom acl for apache/cache manager
 acl SSL_ports port 443
 acl Safe_ports port 80          # http
 acl Safe_ports port 21          # ftp
 acl Safe_ports port 443         # https
 acl Safe_ports port 70          # gopher
 acl Safe_ports port 210         # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280         # http-mgmt
 acl Safe_ports port 488         # gss-http
 acl Safe_ports port 591         # filemaker
 acl Safe_ports port 777         # multiling http
 acl CONNECT method CONNECT
 http_access allow manager localhost
 http_access allow manager sclthdq01w
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 #http_access allow localnet
 http_access allow localhost
 http_access allow TestPoolIPs
 http_access deny all
 http_port 3128
 hierarchy_stoplist cgi-bin ?
 coredump_dir /usr/local/squid/var/cache
 cache_mem 512 MB
 pipeline_prefetch on
 refresh_pattern ^ftp:           1440    20%     10080
 refresh_pattern ^gopher:        1440    0%      1440
 refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
 refresh_pattern .               0       20%     4320

 -Original Message-
 From: Henrik Lidström [mailto:free...@lidstrom.eu]
 Sent: Monday, August 10, 2009 8:16 PM
 To: Daniel
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid 3.1.0.13 Speed Test - Upload breaks?

 Daniel wrote:
 Kinkie,

       I'm using the default settings, so I don't have any specific max 
 request sizes specified. I guess I'll hold out until someone else running 
 3.1 can test this.

 Thanks!

 -Original Message-
 From: Kinkie [mailto:gkin...@gmail.com]
 Sent: Saturday, August 08, 2009 6:44 AM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid 3.1.0.13 Speed Test - Upload breaks?

 Maybe the failure could depend on some specific settings, such as max
 request size?

 On 8/8/09, Heinz Diehl h...@fancy-poultry.org wrote:

 On 08.08.2009, Daniel wrote:


 Would anyone else using Squid mind doing this same bandwidth test and
 seeing
 if they have the same issue(s)?

 It works flawlessly using both 2.7-STABLE6 and 3.0-STABLE18 here.






 Squid Cache: Version 3.1.0.13

 Working without a problem, tested multiple sites on the list.
 Nothing special in the config except maybe pipeline_prefetch on

 /Henrik




Re: [squid-users] Distributed High Performance Squid

2009-08-20 Thread Adrian Chadd
Squid doesn't share memory or disk cache at the moment. It won't
share/slice filedescriptors the way you want them to.

I could probably write a unified logging hack so multiple squid
processes log to the same file via a single helper that handles
multiple pipes or something, one from each Squid. There's no atomic
"append a line" IO method in UNIX, so doing it that way won't work.

You could try hacking things up to lock/unlock the file for each
logfile write but I have no idea what the impact would be.
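
As a hedged illustration of the UDP idea mentioned below, one common variant
is to let syslog do the fan-in instead of Squid itself. This assumes your
Squid build accepts a syslog: access_log target (check squid.conf.default for
your version) and that "mgmt-core" is the hostname of the 13th, management
core:

  # squid.conf on each of the 12 cores
  access_log syslog:local4.info squid

  # /etc/syslog.conf on each of the 12 cores: forward to the management core
  local4.info    @mgmt-core

  # /etc/syslog.conf on the management core: collect into one file
  local4.info    /var/log/squid/access-combined.log

Bear in mind that syslog forwarding is UDP and can drop lines under heavy
load.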

Adrian


2009/8/20 Joel Ebrahimi jebrah...@bivio.net:
 Hi,

 I'm trying to build a high-performance squid. The performance actually
 comes from the hardware, without changes to the code base. I am a
 beginning user of squid, so I figured I would ask the list for the
 best/different ways of setting up this configuration.

 The architecture set up is like this: There are 12 cpu cores that each
 run an instance of squid, each of these 12 cores has access to the same
 disk space but not the same memory, each is its own instance of an OS
 and they can communicate on an internal network, there is a network
 processor that slices up sessions and can hand them off to any one of
 the 12 cores that is available, there is a single conf file and a single
 logging directory.

 The current problem I can see with this setup is that each of the 12
 instances of squid acts individually, so any one of them could try to
 access the same log file at the same time. I'm not sure what impact
 this could have in terms of overwriting data.

 I actually have it set up this way now and it works well, though it's a
 very small test environment, and I'm concerned issues may only pop up in
 larger environments where the logs are accessed very frequently.

 I was looking through some online materials and I saw there are other
 mechanisms for log formatting. The ones that I thought may be of use
 here are either the daemon or udp. There is actually a 13th core in the
 system that is used for management. I was wondering if setting up udp
 logging on this 13th core and having the 12 instances of squid send the
 log info over the internal network would work.

 Thought or better ideas? Problems with either of these scenarios?


 Thanks in advance,

 // Joel

 jebrah...@bivio.net

 Joel Ebrahimi
 Solutions Engineer
 Bivio Networks
 925.924.8681
 jebrah...@bivio.net




Re: [squid-users] 'gprof squid squid.gmon' only shows the initial configuration functions

2009-12-09 Thread Adrian Chadd
Talk to the freebsd guys (eg me) about pmcstat and support for your
hardware. You may just need to find / organise a backport of the
particular hardware support for your platform. I've been working on
profiling Lusca with pmcstat and some new-ish tools which use and
extend it in useful ways.
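
For reference, the rough shape of a pmcstat run (a sketch only: event names
and available counters vary by CPU, and on FreeBSD 6.x you may need the
backported hwpmc bits mentioned above; check pmcstat(8) and pmccontrol -L on
your own box before trusting any of these flags):

  kldload hwpmc                                   # if not already in the kernel
  pmccontrol -L                                   # list the events your CPU exposes
  pmcstat -P instructions -t <squid-pid> -O /tmp/squid.pmc    # sample the running squid
  pmcstat -R /tmp/squid.pmc -G /tmp/callgraph.txt             # post-process into a callgraph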

gprof data is almost certainly uselessly unreliable on modern CPUs.
Too much can and will happen between profiling ticks.

I can hazard a few guesses about where your CPU is going. Likely
candidate is poll() if your Squid is too old. First thing to do is
organise porting the kqueue() stuff if it isn't already included.

I can make more educated guesses about where the likely CPU hog
culprits are given workload and configuration file information.



Adrian

2009/12/10 Guy Bashkansky guy...@gmail.com:
 Is there an oprofile version for FreeBSD?  I thought it is limited to
 Linux.  On FreeBSD I tried pmcstat, but it gives an initialization
 error.

 My version of Squid is old and customized (so I can't upgrade) and may
 not have built-in timers. In which version did they appear?

 As for gprof - even with the event loop on top, still the rest of the
 table might give some idea why the CPU is overloaded.  The problem is
 - I see only initial configuration functions:

                                 called/total       parents
 index  %time    self descendents  called+self    name           index
                                 called/total       children
                                                    spontaneous
 [1]     63.4    0.17        0.00                 _mcount [1]
 ---
               0.00        0.10       1/1           _start [3]
 [2]     36.0    0.00        0.10       1         main [2]
               0.00        0.10       1/1           parseConfigFile [4]
 ...
 ---
                                                    spontaneous
 [3]     36.0    0.00        0.10                 _start [3]
               0.00        0.10       1/1           main [2]
 ---
               0.00        0.10       1/1           main [2]
 [4]     36.0    0.00        0.10       1         parseConfigFile [4]
               0.00        0.09       1/1           readConfigLines [5]
               0.00        0.00     169/6413        parse_line [6]
 ..
 

 System info:

 # uname -m -r -s
 FreeBSD 6.2-RELEASE-p9 amd64

 # gcc -v
 Using built-in specs.
 Configured with: FreeBSD/amd64 system compiler
 Thread model: posix
 gcc version 3.4.6 [FreeBSD] 20060305


 There are 7 fork()s for unlinkd/diskd helpers.  Can these fork()s
 affect profiling info?

 On Wed, Dec 9, 2009 at 2:04 AM, Robert Collins
 robe...@robertcollins.net wrote:
 On Tue, 2009-12-08 at 15:32 -0800, Guy Bashkansky wrote:
 I've built squid with the -pg flag and run it in the no-daemon mode
 (-N flag), without the initial fork().

 I send it the SIGTERM signal which is caught by the signal handler, to
 flag graceful exit from main().

 I expect to see meaningful squid.gmon, but 'gprof squid squid.gmon'
 only shows the initial configuration functions:

 gprof isn't terribly useful anyway - due to squid's callback-based model,
 it will see nearly all of the time as belonging to the event loop.
 
 oprofile and/or squid's built-in analytic timers will get much better
 info.

 -Rob





Re: [squid-users] Squid url_rewrite and cookie

2010-01-06 Thread Adrian Chadd
Please create an Issue and attach the patch. I'll see about including it!




adrian

2010/1/6 Rajesh Nair rajesh.nair...@gmail.com:
 Thanks for the response, Matt!

 Unfortunately the cooperating HTTP service solution would not work,
 as I need to set the cookie for the same domain the request is for,
 and that only happens when the request comes to the squid proxy.

 I have resolved it by extending the squid-url_rewrite protocol to
 accept the cookie string too and modifying the squid code to send the
 cookie in the 302 redirect response.

 Let me know if anybody is interested in the patch!

 Thanks,
 Rajesh

 On Tue, Jan 5, 2010 at 9:41 AM, Matt W. Benjamin m...@linuxbox.com wrote:
 Hi,

 Yes, you cannot (could not) per se.  However, you can rewrite to a 
 cooperating HTTP service which sets a cookie.  And, if you had adjusted 
 Squid so as to pass cookie data to url_rewriter programs, you could also 
 inspect the cookie in it on future requests.

 Matt

 - Rajesh Nair rajesh.nair...@gmail.com wrote:


 Reading the docs , it looks like it is not possible to send any
 HTTP
 response header from the url_rewriter program and the url_rewriter
 merely can return the redirected URI.
 Is this correct?

 Thanks,
 Rajesh

 --

 Matt Benjamin

 The Linux Box
 206 South Fifth Ave. Suite 150
 Ann Arbor, MI  48104

 http://linuxbox.com

 tel. 734-761-4689
 fax. 734-769-8938
 cel. 734-216-5309





Re: [squid-users] Unable to increase filedescriptor limit -- tried all things

2008-01-24 Thread Adrian Chadd
You need to set ulimit -n newlimit before you compile and
run Squid. Maybe you don't need to do it before compilation
these days, I forget.

ulimit -n 32768
check ulimit -a
Then ./configure
make
make install
put ulimit -n 32768 in the startup script
start squid
check the cachemgr info page and/or cache.log; it'll say how many
file descriptors it's starting with.
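
As a minimal sketch (the paths are examples; the point is only that the
ulimit runs in the same shell that execs squid):

  #!/bin/sh
  # top of the squid start-up script
  ulimit -HSn 32768
  /usr/local/squid/sbin/squid

Afterwards, confirm what squid actually got:

  grep "file descriptors" /usr/local/squid/var/logs/cache.log
  # e.g. "With 32768 file descriptors available"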

On Thu, Jan 24, 2008, bijayant kumar wrote:
 Hello list,
 
 I am using squid as proxy server on gentoo box. All of
 a sudden from
  2nd January in my cache.log i am seeing the error 
 
 WARNING! Your cache is running out of filedescriptors
 
 When this messages repeats frequently, browsing
 becomes dead slow in
  2mbps line. We have 2GB RAM, and 1 GB swap , dual
 core processor system.
 
 After googling, checking Squid Faq i have tried to
 increase the limit
  of filedescriptors on my system. But i am not able to
 do. Please help me
  out. here i am giving some information for better
 picture
 
 OS - gentoo
 Kernel - 2.6.18-gentoo-r6
 Squid - net-proxy/squid-2.6.12
USE Flags=ipf-transparent pam ssl
 
 I have changed the filedescriptors in 
 /usr/include/bits/typesizes.h
 
 Number of descriptors that can fit in an `fd_set'
 #define __FD_SETSIZE 2048
 
 
 In /etc/init.d/squid
 ulimit -HSn 2048
 
 ~ $ cat /proc/sys/fs/file-max
 50516
 
 The relevent part of /etc/squid/squid.conf after
 search on google/faq
 
 client_persistent_connections off
 server_persistent_connections off
 cache_dir ufs /var/cache/squid 2000 16 256
 url_rewrite_children 30
 
 
 I did all things specified in Squid Wiki and Faq.
 After that i have
  recompiled the squid and rebooted my machine also
 without any luck. I am
  still getting the warning in my logs, and ulimit -n
 as 1024.
 
 I have tried all possible things without any success.
 Please help me or
  give me some direction.
 
 
 
 Bijayant Kumar
 
 Send instant messages to your online friends http://uk.messenger.yahoo.com 

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] WCCP Routing

2008-01-24 Thread Adrian Chadd
On Thu, Jan 24, 2008, Dave Raven wrote:
 Hi all,
   Is it possible to make the request back out the router that sent in
 a WCCP packet to begin with? For example if you have two routers, and router
 A sends request A and router B sends request B to send them back through
 their origin routers, regardless of your default route etc so that B will
 stick with B and A with A ?

Not that I can think of, but its definitely something that should be 
implemented.
Similar for L2/GRE packet return.




Adrian



Re: [squid-users] Re: Strange issues with squid

2008-01-24 Thread Adrian Chadd
On Thu, Jan 24, 2008, Ryan Thoryk wrote:
 I've got more information (on the FreeBSD side):
 
 The packets are coming in over the GRE interface, but seem to be 
 randomly disappearing after the IPFW forward operation (forwards to 
 localhost:3128).
 
 Here's the ipfw config:
 00150 fwd 127.0.0.1,3128 tcp from any to any dst-port 80 via gre0 in

 00250 fwd 127.0.0.1,3128 ip from any to any via gre0 in

Why are you doing that?
You don't need to redirect the ip at all. Well, in theory you -should-
be to handle ICMP messages, but I don't think that works at all atm
(and is an OS related issue.)

Just do:

add fwd 127.0.0.1,3128 tcp from any to any 80 in via gre0
add fwd 127.0.0.1,3128 tcp from any to any 80 in via gre1
add fwd 127.0.0.1,3128 tcp from any to any 80 in via gre2

.. etc

And see what that does.

I've got multiple WCCPv2 aware routers but i'm in the middle of getting
TPROXY stuff documented and so I can't easily change it all around
to support multiple routers with potential asymmetric traffic paths
for WCCPv2 (which is what you're trying to achieve.)
That requires quite a lot of time :/



Adrian



Re: [squid-users] WCCP Routing

2008-01-24 Thread Adrian Chadd
On Thu, Jan 24, 2008, Jason Taylor wrote:
 I worked around that a few years ago by having multiple instances of 
 squid on my server, each with its own IP and dedicated squid.conf
 Each router would connect to its own squid instance and linux policy 
 routing would determine the default gateway to use.
 The downside is that you are now effective doubling the number of squids 
 that you manage.

I think what you need to do is to be able to set the next-hop adjacency on
a particular socket, overriding the kernels' routing table lookup(s).
You also need a way to discover which router sent you that particular
redirected packet(s).

There may be a way to do it in kernel-land without ever involving the
userland process but I'd have to give it a bit more thought.





Adrian



Re: [squid-users] Unable to increase filedescriptor limit -- tried all things

2008-01-24 Thread Adrian Chadd
You're using UFS and small filedescriptor counts.
Recompile with 16384 filedescriptors and enable AUFS.


Adrian


On Fri, Jan 25, 2008, bijayant kumar wrote:
 Hi Arana,
 
 Thanks for your reply. As you are suggesting in your
 reply that incresing the filedescriptor can be
 dangerous. Is there any other way to get rid of this
 warning, because this warning makes browsing dead
 slow,and the box is deployed at our client place. I
 have to do things fast. If you have any other
 suggestion besides the increasing file descriptor
 please suggest me.
 
 
 
 
 
 --- Gonzalo Arana [EMAIL PROTECTED] wrote:
 
  I would recommend you to run ./configure with
  --with-maxfd=you_desired_limit and --enable-epoll
  
  Watch for messages like this in configure output:
  checking if epoll works... yes
  Using epoll for the IO loop.
  ...
  Maximum filedescriptors set to 131072
  ...
  
  Having large number of FDs with select is dangerous.
   Also, I recall
  there was an issue on increasing FD_SETSIZE on glibc
  (Linux uses
  glibc).
  
  HTH,
  
  On Jan 24, 2008 11:46 AM, Bijayant
  [EMAIL PROTECTED] wrote:
   Hello list,
  
   I am using squid as proxy server on gentoo box.
  All of a sudden from
2nd January in my cache.log i am seeing the error
  
   WARNING! Your cache is running out of
  filedescriptors
  
   When this messages repeats frequently, browsing
  becomes dead slow in
2mbps line. We have 2GB RAM, and 1 GB swap , dual
  core processor system.
  
   After googling, checking Squid Faq i have tried to
  increase the limit
of filedescriptors on my system. But i am not
  able to do. Please help me
out. here i am giving some information for better
  picture
  
   OS - gentoo
   Kernel - 2.6.18-gentoo-r6
   Squid - net-proxy/squid-2.6.12
  USE Flags=ipf-transparent pam ssl
  
   I have changed the filedescriptors in 
  /usr/include/bits/typesizes.h
  
   Number of descriptors that can fit in an `fd_set'
   #define __FD_SETSIZE 2048
  
  
   In /etc/init.d/squid
   ulimit -HSn 2048
  
   ~ $ cat /proc/sys/fs/file-max
   50516
  
   The relevant part of /etc/squid/squid.conf after
  search on google/faq
  
  
   client_persistent_connections off
   server_persistent_connections off
   cache_dir ufs /var/cache/squid 2000 16 256
   url_rewrite_children 30
  
  
   I did all things specified in Squid Wiki and Faq.
  After that i have
recompiled the squid and rebooted my machine also
  without any luck. I am
still getting the warning in my logs, and ulimit
  -n as 1024.
  
   I have tried all possible things without any
  success. Please help me or
give me some direction.
  
  
  
  
  
  -- 
  Gonzalo A. Arana
  
 
 
 Bijayant Kumar
 
 Send instant messages to your online friends http://uk.messenger.yahoo.com 

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Unable to increase filedescriptor limit -- tried all things

2008-01-24 Thread Adrian Chadd
Check ./squid -v ; see what it was compiled with.
Make sure you start squid after you change the default ulimit.
Either put ulimit -n 8192 at the top of the squid startup
script or find the place where your default ulimits are set
and modify that.



Adrian

On Fri, Jan 25, 2008, bijayant kumar wrote:
 Hi,
 
 While checking the squid-2.6.17.ebuild file I found in
 the econf section there is a line --with-maxfd=8192. That
 means that squid has been compiled with 8192
 descriptors, right? But cache.log says 1024
 file descriptors are available and complains about
 running out of file descriptors. Do I have to
 recompile squid again in this case as well?
 Please guide me.
 
 
 
 --- Manoj_Rajkarnikar [EMAIL PROTECTED] wrote:
 
  On Fri, 25 Jan 2008, bijayant kumar wrote:
  
   Hi Arana,
  
   Thanks for your reply. As you are suggesting in
  your
   reply that incresing the filedescriptor can be
   dangerous. Is there any other way to get rid of
  this
   warning, because this warning makes browsing dead
   slow,and the box is deployed at our client place.
  I
   have to do things fast. If you have any other
   suggestion besides the increasing file descriptor
   please suggest me.
  
  
  No AFAIK. you'll have to raise the FD limit but
  don't raise it to tooo 
  high - that was the suggestion.. set it to 2048 or
  4096 to meet the 
  current and near-future workload requirement and
  increase it again in the 
  future if needed...
  
  
  
  
  
   --- Gonzalo Arana [EMAIL PROTECTED] wrote:
  
   I would recommend you to run ./configure with
   --with-maxfd=you_desired_limit and --enable-epoll
  
   Watch for messages like this in configure output:
   checking if epoll works... yes
   Using epoll for the IO loop.
   ...
   Maximum filedescriptors set to 131072
   ...
  
   Having large number of FDs with select is
  dangerous.
Also, I recall
   there was an issue on increasing FD_SETSIZE on
  glibc
   (Linux uses
   glibc).
  
   HTH,
  
   On Jan 24, 2008 11:46 AM, Bijayant
   [EMAIL PROTECTED] wrote:
   Hello list,
  
   I am using squid as proxy server on gentoo box.
   All of a sudden from
2nd January in my cache.log i am seeing the
  error
  
   WARNING! Your cache is running out of
   filedescriptors
  
   When this messages repeats frequently, browsing
   becomes dead slow in
2mbps line. We have 2GB RAM, and 1 GB swap ,
  dual
   core processor system.
  
   After googling, checking Squid Faq i have tried
  to
   increase the limit
of filedescriptors on my system. But i am not
   able to do. Please help me
out. here i am giving some information for
  better
   picture
  
   OS - gentoo
   Kernel - 2.6.18-gentoo-r6
   Squid - net-proxy/squid-2.6.12
  USE Flags=ipf-transparent pam ssl
  
   I have changed the filedescriptors in
   /usr/include/bits/typesizes.h
  
   Number of descriptors that can fit in an
  `fd_set'
   #define __FD_SETSIZE 2048
  
  
   In /etc/init.d/squid
   ulimit -HSn 2048
  
   ~ $ cat /proc/sys/fs/file-max
   50516
  
   The relevant part of /etc/squid/squid.conf after
   search on google/faq
  
  
   client_persistent_connections off
   server_persistent_connections off
   cache_dir ufs /var/cache/squid 2000 16 256
   url_rewrite_children 30
  
  
   I did all things specified in Squid Wiki and
  Faq.
   After that i have
recompiled the squid and rebooted my machine
  also
   without any luck. I am
still getting the warning in my logs, and
  ulimit
   -n as 1024.
  
   I have tried all possible things without any
   success. Please help me or
give me some direction.
  
  
  
  
  
   --
   Gonzalo A. Arana
  
  
  
   Bijayant Kumar
  
   Send instant messages to your online friends
  http://uk.messenger.yahoo.com
  
  
  -- 
  
 
 
 Bijayant Kumar
 
 Send instant messages to your online friends http://uk.messenger.yahoo.com 

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Unable to increase filedescriptor limit -- tried all things

2008-01-24 Thread Adrian Chadd
Try running squid manually; see if that works better.



Adrian

On Fri, Jan 25, 2008, bijayant kumar wrote:
 Hi,
 I did recompile squid and entered as the first line in
 /etc/init.d/squid:
 ulimit -n 8192
 
 and the output of squid -v is:
 
 configure options:  '--prefix=/usr'
 '--host=i686-pc-linux-gnu' '--mandir=/usr/share/man'
 '--infodir=/usr/share/info' '--datadir=/usr/share'
 '--sysconfdir=/etc' '--localstatedir=/var/lib'
 '--sysconfdir=/etc/squid'
 '--libexecdir=/usr/libexec/squid'
 '--localstatedir=/var' '--datadir=/usr/share/squid'
 '--enable-auth=basic,digest,ntlm'
 '--enable-removal-policies=lru,heap'
 '--enable-digest-auth-helpers=password'
 '--enable-basic-auth-helpers=PAM,LDAP,getpwnam,NCSA,MSNT'
 '--enable-external-acl-helpers=ldap_group,ip_user,session,unix_group'
 '--enable-ntlm-auth-helpers=fakeauth'
 '--enable-ident-lookups' '--enable-useragent-log'
 '--enable-cache-digests' '--enable-delay-pools'
 '--enable-referer-log' '--enable-arp-acl'
 '--with-pthreads' '--with-large-files' '--enable-htcp'
 '--enable-carp' '--enable-follow-x-forwarded-for'
 '***--with-maxfd=8192' '--enable-snmp'
 '--enable-ssl'
 '--enable-storeio=ufs,diskd,coss,aufs,null'
 '--enable-async-io' '--enable-linux-netfilter'
 '--enable-epoll' '--build=i686-pc-linux-gnu'
 'build_alias=i686-pc-linux-gnu'
 'host_alias=i686-pc-linux-gnu'
 'CC=i686-pc-linux-gnu-gcc' 'CFLAGS=-O2 -march=i686 '
 
 But while restarting squid it still says
 
 2008/01/25 11:33:25| With 1024 file descriptors
 available
 
 Please help me
 
 --- Adrian Chadd [EMAIL PROTECTED] wrote:
 
  Check ./squid -v ; see what it was compiled with.
  Make sure you start squid after you change the
  default ulimit.
  Either put ulimit -n 8192 at the top of the squid
  startup
  script or find the place where your default ulimits
  are set
  and modify that.
  
  
  
  Adrian
  
  On Fri, Jan 25, 2008, bijayant kumar wrote:
   Hi,
   
   While checking the squid-2.6.17.ebuild file i
  found in
   econf section there is a line --with-maxfd=8192.
  It
   means that squid has been compiled with 8192
   descriptors, right ?? But in cache.log it says
  1024
   file descriptors are available and complains about
  the
   running out of file descriptors. Shall i have to
   recompile squid again in this case also ??
   Please guide me
   
   
   
   --- Manoj_Rajkarnikar [EMAIL PROTECTED] wrote:
   
On Fri, 25 Jan 2008, bijayant kumar wrote:

 Hi Arana,

 Thanks for your reply. As you are suggesting
  in
your
 reply that incresing the filedescriptor can be
 dangerous. Is there any other way to get rid
  of
this
 warning, because this warning makes browsing
  dead
 slow,and the box is deployed at our client
  place.
I
 have to do things fast. If you have any other
 suggestion besides the increasing file
  descriptor
 please suggest me.


No AFAIK. you'll have to raise the FD limit but
don't raise it to tooo 
high - that was the suggestion.. set it to 2048
  or
4096 to meet the 
current and near-future workload requirement and
increase it again in the 
future if needed...





 --- Gonzalo Arana [EMAIL PROTECTED]
  wrote:

 I would recommend you to run ./configure with
 --with-maxfd=you_desired_limit and
  --enable-epoll

 Watch for messages like this in configure
  output:
 checking if epoll works... yes
 Using epoll for the IO loop.
 ...
 Maximum filedescriptors set to 131072
 ...

 Having large number of FDs with select is
dangerous.
  Also, I recall
 there was an issue on increasing FD_SETSIZE
  on
glibc
 (Linux uses
 glibc).

 HTH,

 On Jan 24, 2008 11:46 AM, Bijayant
 [EMAIL PROTECTED] wrote:
 Hello list,

 I am using squid as proxy server on gentoo
  box.
 All of a sudden from
  2nd January in my cache.log i am seeing the
error

 WARNING! Your cache is running out of
 filedescriptors

 When this messages repeats frequently,
  browsing
 becomes dead slow in
  2mbps line. We have 2GB RAM, and 1 GB swap
  ,
dual
 core processor system.

 After googling, checking Squid Faq i have
  tried
to
 increase the limit
  of filedescriptors on my system. But i am
  not
 able to do. Please help me
  out. here i am giving some information for
better
 picture

 OS - gentoo
 Kernel - 2.6.18-gentoo-r6
 Squid - net-proxy/squid-2.6.12
USE Flags=ipf-transparent pam ssl

 I have changed the filedescriptors in
 /usr/include/bits/typesizes.h

 Number of descriptors that can fit in an
`fd_set'
 #define __FD_SETSIZE 2048


 In /etc/init.d/squid
 ulimit -HSn 2048

 ~ $ cat /proc/sys/fs/file-max
 50516

 The relevant part of /etc/squid/squid.conf
  after
 search on google/faq

Re: [squid-users] Unable to increase filedescriptor limit -- tried all things

2008-01-24 Thread Adrian Chadd
On Fri, Jan 25, 2008, bijayant kumar wrote:
 Hi,
 
 [EMAIL PROTECTED] ~ $ ulimit -n
 1024

Thats your problem.

Check your pam security file. under ubuntu its:

[EMAIL PROTECTED]:~$ cat /etc/security/limits.conf | grep nofile
#- nofile - max number of open files
*   hardnofile  8192


Adrian

 
 I have tried to change this via giving the command 
 ulimit -HSn 8192
 
 /usr/include/bits/typesizes.h
 /* Number of descriptors that can fit in an `fd_set'. 
 */
 #define __FD_SETSIZE 8192
 
 rebooted my machine several times, also
 cat /proc/sys/fs/file-max
 11032
 
 I am using gentoo.
 --- Ding Deng [EMAIL PROTECTED] wrote:
 
  bijayant kumar [EMAIL PROTECTED] writes:
  
   I have tried it also, but in log files it still
  saying
   1024 file descriptors available. while squid is
   compile with 8192.
  
  ulimit -n
  
  What's the output?
  
 
 
 Bijayant Kumar
 
 Send instant messages to your online friends http://uk.messenger.yahoo.com 

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Unable to increase filedescriptor limit -- tried all things

2008-01-25 Thread Adrian Chadd
On Fri, Jan 25, 2008, bijayant kumar wrote:
 I have already entered that line 
 
 cat /etc/security/limits.conf | grep nofile
 *   hardnofile  8192
 
 No luck till now. I am restarting squid several times
 to avoid the slow browsing.

I'd go and ask someone on the gentoo lists at this point to find out
what you can do to make sure the ulimit is properly extended.




Adrian

 
 -- Adrian Chadd [EMAIL PROTECTED] wrote:
 
  On Fri, Jan 25, 2008, bijayant kumar wrote:
   Hi,
   
   [EMAIL PROTECTED] ~ $ ulimit -n
   1024
  
  Thats your problem.
  
  Check your pam security file. under ubuntu its:
  
  [EMAIL PROTECTED]:~$ cat /etc/security/limits.conf | grep
  nofile
  #- nofile - max number of open files
  *   hardnofile  8192
  
  
  Adrian
  
   
   I have tried to change this via giving the command
  
   ulimit -HSn 8192
   
   /usr/include/bits/typesizes.h
   /* Number of descriptors that can fit in an
  `fd_set'. 
   */
   #define __FD_SETSIZE 8192
   
   rebooted my machine several times, also
   cat /proc/sys/fs/file-max
   11032
   
   I am using gentoo.
   --- Ding Deng [EMAIL PROTECTED] wrote:
   
bijayant kumar [EMAIL PROTECTED] writes:

 I have tried it also, but in log files it
  still
saying
 1024 file descriptors available. while squid
  is
 compile with 8192.

ulimit -n

What's the output?

   
   
   Bijayant Kumar
   
   Send instant messages to your online friends
  http://uk.messenger.yahoo.com 
  
  -- 
  - Xenion - http://www.xenion.com.au/ - VPS Hosting -
  Commercial Squid Support -
  - $25/pm entry-level VPSes w/ capped bandwidth
  charges available in WA -
  
 
 
 Bijayant Kumar
 
 Send instant messages to your online friends http://uk.messenger.yahoo.com 

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] [SOLVED]Re: [squid-users] Unable to increase filedescriptor limit -- tried all things

2008-01-25 Thread Adrian Chadd
Graph the statistics from Squid, including SNMP.

This'll include the number of active and free filedescriptors. :)
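
A minimal sketch of that, assuming Squid was built with --enable-snmp (this
one was, per the configure options quoted earlier in the thread) and that
your grapher runs on the same box:

  # squid.conf: enable the built-in SNMP agent
  acl snmppublic snmp_community public
  snmp_port 3401
  snmp_access allow snmppublic localhost
  snmp_access deny all

Then point MRTG/Cacti/munin (or just snmpwalk) at the Squid MIB subtree,
which includes the file descriptor gauges:

  snmpwalk -v2c -c public localhost:3401 .1.3.6.1.4.1.3495.1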



Adrian


On Fri, Jan 25, 2008, bijayant kumar wrote:
 Thanks to you and all the members of the list who
 have supported me in rectifying this problem.
 The startup/init script of squid that comes with
 gentoo was wrong. It was setting the filedescriptor
 limit to 1024. I started squid manually as suggested by
 Adrian Chadd, and it started showing 8192 file
 descriptors available. Once again, thanks a ton.
 
 Now I want a suggestion from you: is 8192 file
 descriptors sufficient for 200 to 300 users on the
 network (high traffic), or should it be higher? Other
 than this, are there any other parameters in squid.conf
 to look at? I made it 8192 because that is what is
 specified in the ebuild.
 
 
 --- Adrian Chadd [EMAIL PROTECTED] wrote:
 
  On Fri, Jan 25, 2008, bijayant kumar wrote:
   I have already entered that line 
   
   cat /etc/security/limits.conf | grep nofile
   *   hardnofile  8192
   
   No luck till now. I am restarting squid several
  times
   to avoid the slow browsing.
  
  I'd go and ask someone on the gentoo lists at this
  point to find out
  what you can do to make sure the ulimit is properly
  extended.
  
  
  
  
  Adrian
  
   
   -- Adrian Chadd [EMAIL PROTECTED] wrote:
   
On Fri, Jan 25, 2008, bijayant kumar wrote:
 Hi,
 
 [EMAIL PROTECTED] ~ $ ulimit -n
 1024

Thats your problem.

Check your pam security file. under ubuntu its:

[EMAIL PROTECTED]:~$ cat /etc/security/limits.conf |
  grep
nofile
#- nofile - max number of open files
*   hardnofile  8192


Adrian

 
 I have tried to change this via giving the
  command

 ulimit -HSn 8192
 
 /usr/include/bits/typesizes.h
 /* Number of descriptors that can fit in an
`fd_set'. 
 */
 #define __FD_SETSIZE 8192
 
 rebooted my machine several times, also
 cat /proc/sys/fs/file-max
 11032
 
 I am using gentoo.
 --- Ding Deng [EMAIL PROTECTED] wrote:
 
  bijayant kumar [EMAIL PROTECTED]
  writes:
  
   I have tried it also, but in log files it
still
  saying
   1024 file descriptors available. while
  squid
is
   compile with 8192.
  
  ulimit -n
  
  What's the output?
  
 
 
 Bijayant Kumar
 
 Send instant messages to your online friends
http://uk.messenger.yahoo.com 

-- 
- Xenion - http://www.xenion.com.au/ - VPS
  Hosting -
Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth
charges available in WA -

   
   
   Bijayant Kumar
   
   Send instant messages to your online friends
  http://uk.messenger.yahoo.com 
  
  -- 
  - Xenion - http://www.xenion.com.au/ - VPS Hosting -
  Commercial Squid Support -
  - $25/pm entry-level VPSes w/ capped bandwidth
  charges available in WA -
  
 
 
 Bijayant Kumar
 
 Send instant messages to your online friends http://uk.messenger.yahoo.com 

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] error on compiling

2008-01-25 Thread Adrian Chadd
On Fri, Jan 25, 2008, Rafael Donggon wrote:
 Below is the config.log error message upon compiling..
 I am using Ubuntu 7.10 Desktop Ed.

Make sure you install the meta-package build-essential, and
not individual packages.

It sounds like you're missing part of the development libraries.



Adrian

 
 Please Help.
 
 Thank you.
 
 config.log error:
 This file contains any messages produced by compilers
 while
 running configure, to aid debugging if configure makes
 a mistake.
 
 It was created by Squid Web Proxy configure
 2.6.STABLE18, which was
 generated by GNU Autoconf 2.61.  Invocation command
 line was
 
   $ ./configure 
 
 ## - ##
 ## Platform. ##
 ## - ##
 
 hostname = ubuntu
 uname -m = i686
 uname -r = 2.6.22-14-generic
 uname -s = Linux
 uname -v = #1 SMP Sun Oct 14 23:05:12 GMT 2007
 
 /usr/bin/uname -p = unknown
 /bin/uname -X = unknown
 
 /bin/arch  = unknown
 /usr/bin/arch -k   = unknown
 /usr/convex/getsysinfo = unknown
 /usr/bin/hostinfo  = unknown
 /bin/machine   = unknown
 /usr/bin/oslevel   = unknown
 /bin/universe  = unknown
 
 PATH: /usr/local/sbin
 PATH: /usr/local/bin
 PATH: /usr/sbin
 PATH: /usr/bin
 PATH: /sbin
 PATH: /bin
 PATH: /usr/X11R6/bin
 
 
 ## --- ##
 ## Core tests. ##
 ## --- ##
 
 configure:2080: checking for a BSD-compatible install
 configure:2136: result: /usr/bin/install -c
 configure:2147: checking whether build environment is
 sane
 configure:2190: result: yes
 configure:2255: checking for gawk
 configure:2285: result: no
 configure:2255: checking for mawk
 configure:2271: found /usr/bin/mawk
 configure:2282: result: mawk
 configure:2293: checking whether make sets $(MAKE)
 configure:2314: result: yes
 configure:2499: checking whether to enable
 maintainer-specific portions of Makefiles
 configure:2508: result: no
 configure:2579: checking for gcc
 configure:2595: found /usr/bin/gcc
 configure:2606: result: gcc
 configure:2844: checking for C compiler version
 configure:2851: gcc --version 5
 gcc (GCC) 4.1.3 20070929 (prerelease) (Ubuntu
 4.1.2-16ubuntu2)
 Copyright (C) 2006 Free Software Foundation, Inc.
 This is free software; see the source for copying
 conditions.  There is NO
 warranty; not even for MERCHANTABILITY or FITNESS FOR
 A PARTICULAR PURPOSE.
 
 configure:2854: $? = 0
 configure:2861: gcc -v 5
 Using built-in specs.
 Target: i486-linux-gnu
 Configured with: ../src/configure -v
 --enable-languages=c,c++,fortran,objc,obj-c++,treelang
 --prefix=/usr --enable-shared --with-system-zlib
 --libexecdir=/usr/lib --without-included-gettext
 --enable-threads=posix --enable-nls
 --with-gxx-include-dir=/usr/include/c++/4.1.3
 --program-suffix=-4.1 --enable-__cxa_atexit
 --enable-clocale=gnu --enable-libstdcxx-debug
 --enable-mpfr --enable-checking=release i486-linux-gnu
 Thread model: posix
 gcc version 4.1.3 20070929 (prerelease) (Ubuntu
 4.1.2-16ubuntu2)
 configure:2864: $? = 0
 configure:2871: gcc -V 5
 gcc: '-V' option must have argument
 configure:2874: $? = 1
 configure:2897: checking for C compiler default output
 file name
 configure:2924: gcc   -g conftest.c  5
 /usr/bin/ld: crt1.o: No such file: No such file or
 directory
 collect2: ld returned 1 exit status
 configure:2927: $? = 1
 configure:2965: result: 
 configure: failed program was:
 | /* confdefs.h.  */
 | #define PACKAGE_NAME Squid Web Proxy
 | #define PACKAGE_TARNAME squid
 | #define PACKAGE_VERSION 2.6.STABLE18
 | #define PACKAGE_STRING Squid Web Proxy
 2.6.STABLE18
 | #define PACKAGE_BUGREPORT
 http://www.squid-cache.org/bugs/;
 | #define PACKAGE squid
 | #define VERSION 2.6.STABLE18
 | /* end confdefs.h.  */
 | 
 | int
 | main ()
 | {
 | 
 |   ;
 |   return 0;
 | }
 configure:2972: error: C compiler cannot create
 executables
 See `config.log' for more details.
 
 ##  ##
 ## Cache variables. ##
 ##  ##
 
 ac_cv_env_CC_set=
 ac_cv_env_CC_value=
 ac_cv_env_CFLAGS_set=
 ac_cv_env_CFLAGS_value=
 ac_cv_env_CPPFLAGS_set=
 ac_cv_env_CPPFLAGS_value=
 ac_cv_env_CPP_set=
 ac_cv_env_CPP_value=
 ac_cv_env_LDFLAGS_set=
 ac_cv_env_LDFLAGS_value=
 ac_cv_env_LIBS_set=
 ac_cv_env_LIBS_value=
 ac_cv_env_build_alias_set=
 ac_cv_env_build_alias_value=
 ac_cv_env_host_alias_set=
 ac_cv_env_host_alias_value=
 ac_cv_env_target_alias_set=
 ac_cv_env_target_alias_value=
 ac_cv_path_install='/usr/bin/install -c'
 ac_cv_prog_AWK=mawk
 ac_cv_prog_ac_ct_CC=gcc
 ac_cv_prog_make_make_set=yes
 
 ## - ##
 ## Output variables. ##
 ## - ##
 
 ACLOCAL='${SHELL}
 /tmp/squid-2.6.STABLE18/cfgaux/missing --run
 aclocal-1.9'
 ALLOCA=''
 AMDEPBACKSLASH=''
 AMDEP_FALSE=''
 AMDEP_TRUE=''
 AMTAR='${SHELL} /tmp/squid-2.6.STABLE18/cfgaux/missing
 --run tar'
 AR=''
 AR_R=''
 AUTH_LIBS=''
 AUTH_MODULES=''
 AUTH_OBJS=''
 AUTOCONF='${SHELL}
 /tmp/squid-2.6.STABLE18/cfgaux/missing --run autoconf'
 AUTOHEADER='${SHELL}
 /tmp/squid-2.6.STABLE18/cfgaux/missing --run
 autoheader'
 AUTOMAKE='${SHELL}
 

Re: [squid-users] Vary the cache objects based on the incoming http version, or buckets for different browsers

2008-01-25 Thread Adrian Chadd
On Tue, Jan 22, 2008, Tory M Blue wrote:

 Ideas? Yes 2.7, but trying to get an idea how stable folks think it
 is, as it's been noted it has more http/1.1 functionality.

2.7 seems stable enough for the handful of people who have evaluated it.
The only real way to know if it's stable for you is, well, if you try it
and report back. :)



Adrian



Re: [squid-users] Google We're sorry... message

2008-01-27 Thread Adrian Chadd
I've only come across this in my own testing.

I'd suggest you find a support contact at google and provide your
details (at least the public IPs of your proxy servers) and see
what Google have to say.

There's no workaround that I can think of with Squid that doesn't involve
mapping users to different public IP addresses, or to hack Squid to detect
403's from Google and change public IP address.
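
For the first of those options, a hedged squid.conf sketch of mapping user
groups to different outgoing addresses (all subnets and addresses below are
placeholders, and each outgoing address must already be configured on the
box):

  acl lab_a src 10.1.0.0/16
  acl lab_b src 10.2.0.0/16
  tcp_outgoing_address 203.0.113.10 lab_a
  tcp_outgoing_address 203.0.113.11 lab_b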

If this becomes a problem then we'll have to collate information and
present it to Google.



adrian


On Fri, Jan 25, 2008, Iain wrote:
 I'm noticing that users behind a transparent proxy are seeing the Google 
 403 page that presents the following message when attempting a search:
 
 *** START ***
 We're sorry...
 
 ... but your query looks similar to automated requests from a computer 
 virus or spyware application. To protect our users, we can't process 
 your request right now.
 
 We'll restore your access as quickly as possible, so try again soon. In 
 the meantime, if you suspect that your computer or network has been 
 infected, you might want to run a virus checker or spyware remover to 
 make sure that your systems are free of viruses and other spurious software.
 
 If you're continually receiving this error, you may be able to resolve 
 the problem by deleting your Google cookie and revisiting Google. For 
 browser-specific instructions, please consult your browser's online 
 support center.
 
 We apologize for the inconvenience, and hope we'll see you again on Google.
 *** END ***
 
 FWIW, the X-Forwarded-For headers are being sent.
 
 Looking around, it seems this is a common problem for people behind a 
 proxy. Has anyone found a work-around for it in Squid?

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Mem Cache flush

2008-01-28 Thread Adrian Chadd
On Mon, Jan 28, 2008, Chris Woodfield wrote:
 This does bring an interesting question - is it possible to give squid  
 *too much* memory?
 
 My theoretical setup would be an uber-box (32GB RAM, multi-TB of disk)  
 running 64-bit squid and with mem_cache set to something in the  
 25-30GB range (as high as we can without swap risk), with a  
 maximum_object_size_in_memory in the multiple MB; we want to  
 effectively cache as much as possible in memory as opposed to disk.  
 Squid and associated utilities will be the only thing running on the  
 box.
 
 Does this make sense, or is a more balanced approach re: squid  
 cache_mem vs. kernel page cache allocation going to provide better  
 performance?

Squid's great at keeping small objects in memory, but not large objects.
Its been a known problem for a while. I've got a dotpoint to fix it in
Squid-2.X; 3.x has a different memory object cache organisation which
will need to be revisited once I've got some more experience with memory
caching in 2.x.




Adrian



Re: [squid-users] cache disk failure handling?

2008-01-28 Thread Adrian Chadd
Squid will probably crash.

RAID1 is an acceptable compromise and may improve IO throughput
slightly.

I've got a goal to get some alternate storage code going in the next
6 to 12 months which will make a future codebase handle this sort
of situation better.


Adrian

On Mon, Jan 28, 2008, Chris Woodfield wrote:
 Hi,
 
 Reading the squid FAQ, it's obvious to me that putting cache_dirs on a  
 RAID (particularly RAID5) has serious performance penalties and is  
 highly discouraged. However, what's not as clear is how squid deals  
 with single-disk failures and whether or not it handles failures  
 gracefully enough to obviate the need for RAID.
 
 If I have a squid running multiple cache_dirs on single disks, and one  
 disk suffers a failure, how does squid respond? Will it simply stop  
 using that cache_dir and soldier on, or can this cause an application  
 crash?
 
 Also, when starting up squid, what is the effect of an unavailable  
 cache_dir? I'm thinking of a situation where squid is restarted before  
 a bad disk can be replaced.
 
 If squid does have problems here, could using pairs of RAID1  
 partitions be an acceptable compromise, with the cost of reduced total  
 storage?
 
 Thanks,
 
 -Chris
 

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Squid.conf deleting host...

2008-01-28 Thread Adrian Chadd
* Which version of Squid?
* What does your http_port line look like?


On Mon, Jan 28, 2008, Sherwood Botsford wrote:
 I've put my foot in my mouth up to about the knee.
 
 Somehow in an edit squid.conf now does something very odd:
 
 If I look up http://wiki.squid-cache.org/FrontPage
 
 I get an error:
 
 The requested URL could not be retrieved
 
 While trying to retrieve the URL: /FrontPage
 
 The following error was encountered:
 Invalid URL
 
 
 Looking in cache.log:
 1201566150.964  0 polaris.sjsa.internal.net TCP_DENIED/400 
 1475 GET /FrontPage - NONE/- text/html
 
 
 What have I done to delete the http://hostname part?

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Patching Squid 2.6 icap patch with Squid-2.6.STABLE10 - problem

2008-01-28 Thread Adrian Chadd
On Tue, Jan 29, 2008, Amos Jeffries wrote:

 I had Adrian benchmark 3.x recently. With his specific RAM-pathways test.
 
 The cutoff for speed seems to be Squid3 reaching 500-650 req/sec and 
 Squid 2.6 going past that into the 800-900 req/sec ranges. At a few 
 hundred concurrent requests.

http://squidproxy.wordpress.com/ has the graph if anyone is interested.
That's on a PIII-Celeron 633MHz box with 128KB of L2 cache.

I've got some munin graphs showing Squid-2.6, Squid-2.7, Squid-2.HEAD and my
hacking branch (s27_adri) which will appear in HEAD as time/funding permits;
I've got s27_adri handling a non-disk workload of ~250 req/sec across a LAN
with about 15% CPU free. Squid-2.6, 2.7 and 3.0 all max out at about 180-200
req/sec on the same test.

Unfortunately it's a bit tetchy - if the concurrent client count grows above
about 800 clients, the CPU usage spikes to full, concurrent client count
sits at about 3000 and the service time increases to ~1.5 seconds. Squid-2.X
and 3.X do this straight away. Not much can be done about this besides
making the code more efficient (or buying more hardware, but that peaks out
after a certain point too.)

So if you're running ~100 req/sec, Squid-2.6, 2.7 and 3.0 will be fine for you.
If you're wanting to push things slightly harder (say, 1000 req/sec), come see 
me.



Adrian



Re: [squid-users] cache seems to fail - live.com and evga.com

2008-01-29 Thread Adrian Chadd
Would you file a bugzilla bug with the details?


Thanks!

Adrian

On Tue, Jan 29, 2008, Dave Overton wrote:
  
  Using transparent proxy stuff, in freebsd/wccp2 setup.
  
  http://gallery.live.com just sits there
  http://www.evga.com just sits there.
  
  Other live.com sites work, that one doesn't.  The evga.com 
  thing is new today, can anyone else with a working 3.0 server 
  see if they have problems with these sites?
  
  Thanks in advance for your help.
 
 Following up.  
 
 Uninstalled 3.0 Stable1, installed 2.6 stable 18, same configs,
 same everything else, and it works fine.  gallery.live.com opens normally,
 as does the evga site.
 
 Humm...
 
 Dave



-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] squid zero sized reply and invalid response when accessing google

2008-01-29 Thread Adrian Chadd
The best place to start is to throw this into a bugzilla ticket
so it doesn't get lost.

On Tue, Jan 29, 2008, faidzul eazam wrote:
 hi all
 I'm using squid 2.6.18 on FreeBSD 6.2. Before this, squid ran smoothly,
 but starting yesterday there are certain websites that I can't access;
 they keep giving me "zero sized reply", and when I try upgrading to 3.0 I
 keep getting "invalid response". The website I am browsing is
 www.google.com. The access log shows TCP_MISS/502.
 
 can anyone help me on this?

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Port 0 Behaviour.

2008-01-29 Thread Adrian Chadd
Hm, weird! Can you please throw this in a bugzilla ticket?

Thanks,


Adrian

On Tue, Jan 29, 2008, Víctor J. Hernández Gómez wrote:
 Hi all,
 
 we have a squid2.5STABLE14 on Linux up and working wonders for plenty of 
 time, and have found today our first problem.
 
 Looking at the log (access.log) we can see the following line:
 
 Jan 28 12:41:19 [EMAIL PROTECTED] squid_a: 1201520479.040  0
 192.168.1.51 TCP_DENIED/400 1568 POST
 http://193.147.185.27:0/HZTunnel/4823/479dbf5e2f3 - NONE/- text/html
 
 Denying access to !!port 0!! I have been investigating on squid-user 
 mailing list itself and found a thread with an interesting issue:  
 fastening squid to a port: problem. This thread has let me learn that 
 browsers send requests to port 80 when URLs are those of the type 
 http://host.domain:0/ This works not only when requests are directed to 
 a conventional web server, but also when requests are directed to the 
 proxy (that is, the browser itself strips the 0 part of the URL 
 transforming the URL to a canonized form -or similar behaviour-).
 
 However, we are now using sort of a web client for virtual learning 
 which seems to have its own way for doing client connections... 
 Strangely when the client is not using the proxy, it works perfectly but 
 when it is using it, squid denies the access (see log above).
 
 I have added acl Safe_port 0 to squid.conf just in case. Any idea of 
 what to do to bypass this problem?
 
 Thank you in advance,
 
 -- 
  Víctor J. Hernández Gómez
  Centro de Informática y Comunicaciones
  Universidad Pablo de Olavide, de Sevilla
 95/4349258  - [EMAIL PROTECTED]
 

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Web service failing through squid

2008-01-29 Thread Adrian Chadd
2.5 isn't really supported by anyone anymore; I'd suggest upgrading to
the latest 2.X (which is 2.6.STABLE18 atm) or 3.X (3.0.STABLE1) releases
and see if it works.



Adrian

On Tue, Jan 29, 2008, Gavin Hamill wrote:
 Hi,
 
 We have a 2.5.12 installation on Ubuntu dapper which we're having
 problems using with a remote SOAP web service. Squid has been happily
 chugging away for over a year for our busy LAN, and this is the first
 issue we've had with it.
 
 In short, squid passes the SOAP request from the LAN client to the
 service, and squid receives a reply from said service, but squid then
 passes an empty document back to the client on the LAN.
 
 I've made a tcpdump capture available at http://bum.net/squid.cap
 
 LAN client = 10... , Squid IP = 194..., Remote Service = 212...
 
 There are two packets with invalid checksums - I used a hex editor to
 remove our user/password from the data stream.
 
 I notice that HTTP/1.0 has been requested by the client, yet it
 specifies a Host: header - but I can't imagine this is the cause of the
 issue. Any advice would be warmly welcomed!
 
 Cheers,
 Gavin.
 

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Change squid process name

2008-01-30 Thread Adrian Chadd
Talk to your OS vendor support and see if they've got tools to limit
access to the process list to the processes running under your uid.
THen users can only see processes running under their uid, and won't
see Squid.



Adrian

On Wed, Jan 30, 2008, Richard wrote:
 Hello!
 
 I am running the current Squid version on a Unix server, and I don't want 
 other users to see that Squid is running!
 
 How can I prevent Squid from being listed by tools like top?
 
 Is it possible to change the process name?
 
 Please excuse my bad English!
 
 Thank you!
 
 Best regards,
 
 Richard

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Refusing connections during squid -k reconfigure

2008-01-31 Thread Adrian Chadd
On Thu, Jan 31, 2008, Amos Jeffries wrote:

 If the delay between stopping and starting is long enough, and it is 
 done with X plus a random-time offset, Squid will cope with some helpers 
 simply stopping and will resume them.

Yeah, the trouble is that you can't kill them all or Squid will die
complaining the helpers are dying too quickly.

Hm, I've always wanted to fix that pause during reconfigure and
rotate. Of course, reconfigure's deny accepting connections is
probably it closing and re-opening all its listen() sockets..



Adrian



Re: [squid-users] Cache for mp3 and ogg in memory...

2008-01-31 Thread Adrian Chadd
On Thu, Jan 31, 2008, Tek Bahadur Limbu wrote:

 Do you think that the ZFS file system scales better than current file 
 systems when used for a cache such as Squid's?
 
 Do you have any statistics?

I've got no statistics and I've not seen any reports from anyone who
has tried comparing ZFS to others.

ZFS has one unfortunate side-effect: its extraordinarily memory hungry
compared to traditional UNIX file systems. :/



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -


Re: [squid-users] Mem Cache flush

2008-01-31 Thread Adrian Chadd
On Thu, Jan 31, 2008, Chris Woodfield wrote:
 Interesting. What sort of size threshold do you see where performance  
 begins to drop off? Is it just a matter of larger objects reducing  
 hitrate (due to few objects being cacheable in memory) or a bottleneck  
 in squid itself that causes issues?

Its a bottleneck in the Squid code which makes accessing the n'th 4k
chunk in memory take O(N) time.

Its one of the things I'd like to fix after Squid-2.7 is released.



Adrian



Re: [squid-users] WCCP Support for SquidNT

2008-01-31 Thread Adrian Chadd
On Thu, Jan 31, 2008, Squid Dev wrote:
 Hi guys,
 
 I've seen some posts already (dated a while back) that there is no
 support as of yet for WCCP on SquidNT, due to the lack of
 implementation/integration of GRE on Windows.
 
 Is this still the case? if so, is there any sort of development
 towards a solution?

Packeteer acquired a company which sold a WAN accelerator product
which included Squid for Windows, and it included WCCPv2 support.

So it's certainly possible, if someone wishes to write it up.



adrian



Re: [squid-users] Refusing connections during squid -k reconfigure

2008-01-31 Thread Adrian Chadd
On Fri, Feb 01, 2008, Amos Jeffries wrote:

  Hm, I've always wanted to fix that pause during reconfigure and
  rotate. Of course, the refusal to accept connections during reconfigure
  is probably due to it closing and re-opening all its listen() sockets.
 
 I've been thinking the same.

Yeah, and squid -k rotate's massive delay is because it's writing out new
swap logs synchronously, and on a large proxy that is going to take a bloody
long time..
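
For illustration only (a sketch, not Squid's actual swap-log rewrite
code): the rewrite walks every cached object and writes its record with
blocking I/O before control returns to the event loop, so a cache with
millions of objects stops serving clients for the whole rewrite:

/* Sketch only -- not the Squid source. */
#include <stdio.h>

struct swap_log_record {
    /* object key, on-disk file number, timestamps, size, ... */
    long swap_file_number;
};

void rewrite_swap_log(FILE *new_log, const struct swap_log_record *records,
                      size_t n_records)
{
    for (size_t i = 0; i < n_records; i++) {
        /* one record per cached object, written synchronously */
        fwrite(&records[i], sizeof records[i], 1, new_log);
    }
    fflush(new_log);  /* no client I/O is serviced until this returns */
}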




Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -


Re: [squid-users] squid -k rotate restarts url_rewriters!

2008-01-31 Thread Adrian Chadd
On Thu, Jan 31, 2008, Chris Woodfield wrote:
 I just put a squid system with url_rewriter children into production.  
 Alongside this we have a script that regularly runs squid -k rotate,  
 then FTPs the log.1 files to a remote site for backup/processing.
 
 The issue I've noticed is that every time squid -k rotate is run,  
 squid also stops and restarts all of its url_rewriter children. While  
 this happens fairly quickly, it's a step I'd rather not have it do unless  
 necessary, particularly under the load these boxes are expected to be  
 able to handle.
 
 Consider this a feature request (and if 3.0 does this already, please  
 let me know): do not restart url_rewriter children on a USR1 signal  
 (or whichever signal squid -k rotate sends in 2.6 STABLE18).

Well, I was thinking about splitting some of these actions out into
internal URLs rather than signals (as we're going to run out of
portable signals at some point), including:

* rotate the normal logs, but not the swap logs
* rotate the swap logs
* gracefully restart the helpers
* re-load the ACL text files, but don't re-read the entire configuration

Adding these into Squid-2 and Squid-3 shouldn't be all that hard
(as practically the same calls are going to be made..)
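
Purely as a hypothetical sketch of what the dispatch could look like
(this is NOT an existing Squid interface; the action names below are
just the ones listed above):

/* Hypothetical sketch.  Named actions arriving on an internal admin
 * URL are looked up in a table instead of overloading a handful of
 * portable signals. */
#include <string.h>

typedef void (*action_fn)(void);

/* Stubs; the real handlers would call the same internal routines the
 * signal handlers call today. */
static void rotate_access_logs(void) { /* rotate the normal logs only */ }
static void rotate_swap_logs(void)   { /* rewrite the swap logs       */ }
static void restart_helpers(void)    { /* gracefully cycle helpers    */ }
static void reload_acl_files(void)   { /* re-read ACL text files only */ }

static const struct {
    const char *name;
    action_fn handler;
} actions[] = {
    { "rotate-logs",     rotate_access_logs },
    { "rotate-swap",     rotate_swap_logs   },
    { "restart-helpers", restart_helpers    },
    { "reload-acls",     reload_acl_files   },
};

/* Dispatch the action named in the internal URL; returns -1 if unknown. */
int dispatch_internal_action(const char *name)
{
    for (size_t i = 0; i < sizeof actions / sizeof actions[0]; i++) {
        if (strcmp(name, actions[i].name) == 0) {
            actions[i].handler();
            return 0;
        }
    }
    return -1;
}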

Send me a private email if you'd like to discuss getting it done.


Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: AW: [squid-users] squid Version 2.6.STABLE16 crashing with url_rewriters

2008-02-01 Thread Adrian Chadd
You need to see where the url helper (squidguard?) is actually logging
to, and peruse its logs.



Adrian

On Fri, Feb 01, 2008, Goj, Dirk wrote:
 Hi.
 
 Ah, OK. That's done automatically every night by a scheduled cron job. But I 
 also ran it manually, and it brought no results. The problem persists.
 
 Dirk
 
 
 From: Cassiano Martin [mailto:[EMAIL PROTECTED]
 Sent: Friday, 1 February 2008 12:06
 To: Goj, Dirk
 Subject: Re: AW: [squid-users] squid Version 2.6.STABLE16 crashing with 
 url_rewriters
 
 You can regenerate the DBs by issuing
 sudo squidGuard -C all
 
 and then restarting squid. squidGuard will recreate the DBs.
 
 []'s
 Cassiano Martin
 
 Goj, Dirk wrote:
 Hi.
 
 Thx for reply.
 
 -Original Message-
 From: Cassiano Martin [mailto:[EMAIL PROTECTED]
 Sent: Friday, 1 February 2008 10:58
 To: Goj, Dirk
 Subject: Re: [squid-users] squid Version 2.6.STABLE16 crashing with 
 url_rewriters
 
 Goj, Dirk wrote:
 
 Hi there.
 
 Yesterday my proxy started crashing with the following error message:
 
 2008/01/31 14:04:12| Starting Squid Cache version 2.6.STABLE16 for 
 i386-debian-linux-gnu...
 2008/01/31 14:04:12| Process ID 22919
 2008/01/31 14:04:12| With 1024 file descriptors available
 2008/01/31 14:04:12| Using epoll for the IO loop
 
 ##snip
 
 FATAL: The url_rewriter helpers are crashing too rapidly, need help!
 
 Squid Cache (Version 2.6.STABLE16): Terminated abnormally.
 CPU Usage: 0.580 seconds = 0.260 user + 0.320 sys
 Maximum Resident Size: 0 KB
 Page faults with physical i/o: 0
 Memory usage for squid via mallinfo():
 total space in arena:   16272 KB
 Ordinary blocks:16186 KB 11 blks
 Small blocks:   0 KB  0 blks
 Holding blocks:   504 KB  2 blks
 Free Small blocks:  0 KB
 Free Ordinary blocks:  85 KB
 Total in use:   16690 KB 99%
 Total free:85 KB 1%
 
 When I restart squid from webmin or from ssh the same thing happens. I changed 
 nothing in squid itself. I searched the internet and found one suggestion that 
 the helper (squidGuard) may have too big a logfile, but it's not the logfile. 
 When I disable the squidGuard helper, squid runs normally. squidGuard hasn't 
 changed either... access rights to the squidGuard DBs are OK; every folder and 
 file is owned by the same user that squid runs under...
 
 I'm out of ideas...
 
 May anyone of you have ideas ?
 
 Thx!
 
 
 Dirk Goj
 
 
 
 
 
 
 
 
 It's related to squidGuard; take a look at its DB, something is making
 it crash. Have you scanned the squidGuard DBs with db_verify?
 
 
 Yes, that's my thinking too, but I can't find anything... How do I verify the 
 squidGuard DBs?
 

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] cache clusters

2008-02-02 Thread Adrian Chadd
On Sat, Feb 02, 2008, J. Peng wrote:
 Hello,
 
 How do I configure cache clusters in squid 2.6? i.e., the parent/sibling
 caches for a web reverse proxy.
 Is there any documentation or howto? Thanks.

Have you checked out the Squid FAQ and other documentation in the wiki?
http://wiki.squid-cache.org/SquidFaq/ 



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -

