Is that per-flow, or in total?
Adrian
2008/11/24 Ken DBA [EMAIL PROTECTED]:
Hello,
I was just finding that the flow capacity of Squid is too limited.
It's hard even to reach 150 Mbit/s.
How can I improve the flow capacity of Squid in reverse-proxy mode?
Thanks in
Gah, the way they work is really quite simple.
* ufs does the disk I/O at the time the request happens. It used to try
using select/poll on the disk fds, from what I can gather in the deep,
deep dark history of CVS, but that was probably so the disk I/O happened
in the next I/O loop so recursion was
2008/9/29 Amos Jeffries [EMAIL PROTECTED]:
Squid-2 has issues with handling of very large individual files being
somewhat slow.
Only if you have an insanely large cache_mem and
maximum_object_size_in_memory setting. Very large individual files on
disk are handled just as efficiently across
proxies
that achieve much more than 150 Mbit/s, even considering the
shortcomings of the codebases, I can't help but think there's
something else going on that isn't specifically Squid's fault. :)
Adrian
2008/11/28 Ken DBA [EMAIL PROTECTED]:
--- On Thu, 11/27/08, Adrian Chadd [EMAIL
Heh. The best way under unix is a hybrid of threads and epoll/kqueue
w/ non-blocking socket IO.
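The hybrid is easier to see in miniature. Below is a toy sketch (my own
illustration, not Squid code): one non-blocking socket watched by an
epoll/kqueue-backed selector on the main thread, while a worker thread does
the "blocking" part and hands its result back over a socketpair.

```python
import selectors
import socket
import threading

def worker(wsock):
    # Stand-in for blocking work (e.g. disk I/O) done off the event loop.
    wsock.sendall(b"done")

def demo():
    rsock, wsock = socket.socketpair()
    rsock.setblocking(False)               # non-blocking socket I/O
    sel = selectors.DefaultSelector()      # epoll on Linux, kqueue on BSD
    sel.register(rsock, selectors.EVENT_READ)

    t = threading.Thread(target=worker, args=(wsock,))
    t.start()

    result = b""
    for key, _ in sel.select(timeout=5):   # one event-loop iteration
        result = key.fileobj.recv(16)

    t.join()
    sel.close()
    rsock.close()
    wsock.close()
    return result
```

A real server would loop over sel.select() forever and keep many sockets
registered; the point is that the event loop never blocks on anything but
the selector itself.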
Adrian
2008/11/28 Ken DBA [EMAIL PROTECTED]:
--- On Sat, 11/29/08, Adrian Chadd [EMAIL PROTECTED] wrote:
From: Adrian Chadd [EMAIL PROTECTED]
Considering people have deployed Squid forward
Does Squid-2.7.STABLE5 exhibit this issue?
Adrian
2008/11/28 Marcel Grandemange [EMAIL PROTECTED]:
Looks like squid broke itself again.
If anybody could advise me as to what's happening here it would be great.
I'm thinking the move to v3 has been disastrous so far.
Every time I now use
Good detective work! I'm not sure whether this is a requirement or
not. Henrik would know better.
Henrik, is this worthy of a bugzilla report?
adrian
2008/11/30 Itzcak Pechtalt [EMAIL PROTECTED]:
Hi
I found some inefficiency in Squid's TCP connection handling toward servers.
In some cases
Things have changed somewhat since that algorithm was decided upon.
Directory searches were linear and the amount of buffer cache /
directory name cache available wasn't huge.
Having large directories took time to search and took RAM to cache.
No one's really sat down and done any hard-core
Someone may beat me to this, but I'm actually proposing a quote to a
company to implement quota services in Squid to support stuff just
like what you've asked for.
I'll keep the list posted about this. Hopefully I'll get the green
light in a week or so and can begin work on implementing the
2008/12/5 Nyamul Hassan [EMAIL PROTECTED]:
Thx for the response Adrian. Earlier I was using only AUFS on each drive,
and the system choked on IOWait above 200 req/sec. But, after I added COSS
in the mix, it improved VASTLY.
Well, that's why it's there, right? :)
Since you're the COSS
rebuild it.
Thanks &amp; Regards,
Kaustav
- Original Message
From: Adrian Chadd [EMAIL PROTECTED]
To: Kaustav Dey Biswas [EMAIL PROTECTED]
Cc: Squid squid-users@squid-cache.org
Sent: Saturday, 6 December, 2008 12:28:10 AM
Subject: Re: [squid-users] How to interrupt ongoing transfers
It's a hack done to defer a storage manager transaction from
beginning whilst another one is in progress for that same connection.
I'd suggest using your OS profiling to figure out where the CPU is
being spent. This may be a symptom, not the cause.
adrian
2008/12/7 Bin Liu [EMAIL
several days ago
(http://www.squid-cache.org/mail-archive/squid-users/200811/0647.html),
which mentions that some operations may block under FreeBSD. So could
that cause this problem?
Thanks again.
Regards,
Liu
On Tue, Dec 9, 2008 at 23:28, Adrian Chadd adr...@squid-cache.org wrote:
Its
2008/12/17 Mark Kent mk...@messagelabs.com:
I tried running under valgrind, and it found a couple of leaks, but I'm
not sure that those are strictly the problem. If it were a traditional
memory leak, where memory was just wandering off, I don't quite see why
the CPU would climb along with the
Nope, I don't think the storeurl-rewriter stuff was ever integrated into ICP.
I think someone posted a patch to the squid bugzilla to implement this.
I'm happy to commit whatever people sensibly code up and deploy. :)
Adrian
2008/12/18 Imri Zvik im...@bsd.org.il:
Hi,
I'm using the
The one thing I've been looking to do for other updates is to
post-process store.log and find URLs which have been partial-replied
to (206) ending in various extensions, then queue entire-file
fetches of them to make sure they fully enter the cache.
It's suboptimal, but it seems to work just
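For what it's worth, a minimal sketch of that post-processing step (my own,
untested against a real store.log; the field positions and the extension
list are assumptions, so check your Squid version's store.log format first):

```python
def partial_urls(lines, exts=('.zip', '.exe', '.iso')):
    """Collect URLs whose store.log entry shows a 206 partial reply and
    whose path ends in one of the given extensions."""
    urls = set()
    for line in lines:
        fields = line.split()
        # assumption: the HTTP status is the 6th field, the URL is the last
        if len(fields) >= 7 and fields[5] == '206' and fields[-1].endswith(exts):
            urls.add(fields[-1])
    return sorted(urls)
```

Each resulting URL can then be fetched in full (e.g. with squidclient) so
the whole object enters the cache.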
Thanks. Be sure to comment on the bugzilla ticket too.
Oh and tell me which bug it is so I can make sure I'm watching it. :)
Adrian
2008/12/23 Imri Zvik im...@bsd.org.il:
On Sunday 21 December 2008 10:52:42 Imri Zvik wrote:
Hi,
On Thursday 18 December 2008 21:57:22 Adrian Chadd wrote
I'm still not sure whether the correct behaviour is to send ICP for
the rewritten URL, or to rewrite the URLs being received before
they're looked up.
Hm!
Adrian
2008/12/24 Imri Zvik im...@bsd.org.il:
On Wednesday 24 December 2008 17:01:39 Adrian Chadd wrote:
Thanks. Be sure to comment
No, I don't think it can.
I'm just wrapping up some changes to FreeBSD-current and my Squid fork
to support tproxy-like functionality under FreeBSD + ipfw.
Adrian
2009/1/7 Mehmet ÇELİK r...@justunix.org:
As per usual, the easiest fix is to re-write the web app properly.
The REMOTE_ADDR is
Hi guys,
Those of you who are using FreeBSD should have a look at squidstats.
It's based on Henrik's scripts to gather basic statistics from Squid
via SNMP and graph them. It lives in a googlecode project I created,
and I'm also the port maintainer, so it should be easy for me to fix
bugs. :)
2009/1/15 Mark Powell m.s.pow...@salford.ac.uk:
Did you manage to get that FreeBSD 7 server working with COSS?
Well did you :)
Yes. At least in testing. I don't (yet) have a client running
FreeBSD-7 and using COSS.
This problem still exists in the latest squid. Any likelihood of a fix, or
Hi everyone,
Just letting you all know that I'm (slowly) tidying up and uploading
the various squid related tools that I've written over the years (the
ones I can find / release :) into another googlecode project.
The url is: http://code.google.com/p/squidtools/
There's not much there at the
hi everyone,
Someone posted a request here a few days ago for how to convince
squirm to use parts of a regular expression match in a rewritten
URL. This isn't a new request, and I've done it a bunch of times in my
own rewriters, but I figured I should get around to doing it in a
generic way so I
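The general shape is small. Here's a hedged sketch of such a rewriter
speaking Squid's redirector protocol (one request per line on stdin,
rewritten URL on stdout); the pattern and hostnames are made-up examples,
not anything squirm ships:

```python
#!/usr/bin/env python3
"""Sketch of a Squid url-rewriter that reuses parts of the matched URL
via regex back-references.  Rules and hostnames are illustrative only."""
import re
import sys

RULES = [
    # \1 in the replacement refers to the first capture group
    (re.compile(r'^http://cdn\d+\.example\.com/(.*)$'),
     r'http://cache.example.com/\1'),
]

def rewrite(url):
    for pattern, replacement in RULES:
        if pattern.match(url):
            return pattern.sub(replacement, url)
    return url        # no rule matched: pass the URL through unchanged

if __name__ == '__main__':
    # Squid writes one request per line: "URL client/fqdn ident method ..."
    for line in sys.stdin:
        parts = line.split()
        print(rewrite(parts[0]) if parts else '')
        sys.stdout.flush()
```

Hooked up via url_rewrite_program, every cdnN.example.com URL would be
fetched (and cached) as the single canonical cache.example.com URL.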
If it hasn't been swapped out to disk, the object has to stay in RAM
until the client(s) currently fetching from it have fetched enough for
part of the object (ie, the stuff at the beginning which has been sent
to clients) to be freed.
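As a toy illustration of that constraint (mine, not Squid's actual code):
the freeable prefix of an in-RAM object is bounded by the slowest attached
client.

```python
def freeable_prefix(object_size, client_offsets):
    """Bytes at the head of an in-RAM object that can be released:
    only what every attached client has already been sent."""
    if not client_offsets:      # no readers left: the whole object can go
        return object_size
    return min(client_offsets)
```

So one slow client at offset 0 pins the entire object in memory, no matter
how far ahead the other clients are.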
Adrian
2009/1/20 Taehwan Weon taehwan.w...@gmail.com:
In addition, at the dump time, squid had no client connection.
-Original Message-
From: adrian.ch...@gmail.com [mailto:adrian.ch...@gmail.com] On Behalf Of
Adrian Chadd
Sent: Wednesday, January 21, 2009 10:42 AM
To: taehwan.w...@gmail.com
Cc: squid-users@squid-cache.org
Subject: Re: [squid
2009/1/21 Amos Jeffries squ...@treenet.co.nz:
Yes it can. Squid's passing through of large objects is much more
efficient than its pass-thru of small objects. A few dozen clients
simultaneously grabbing movies or streaming through a CONNECT request can
saturate a multi-GB link network buffer
2009/1/22 Amos Jeffries squ...@treenet.co.nz:
How intensive is intensive? At the moment squid is averaging a mere 2.4%
processor time.
IIRC older Squid-2 had to step a linked-list the length of the object in 4KB
chunks to perform one of the basic operations (network write I think).
Yeah -
Hi all,
It's been a tough decision, but I'm resigning from any further active
role in the Squid core group and cutting back on contributing towards
Squid development.
I'd like to wish the rest of the active developers all the best in the
future, and thank everyone here for helping me develop and
it means they didn't bother investigating the problem and reporting
back to squid-users/squid-dev.
They may find that Squid-2.7 (and my squid-2 fork) perform a ton
better over whatever version they tried.
I'm trying to continue benchmarking my local Squid-2 fork against
simulated lots of
I'm giving my /dev/poll (Solaris 10) code a good thrashing on some
updated Sun hardware. I've fixed one silly bug of mine in 2.7 and
2.HEAD.
If you're running Solaris 10 and not using the /dev/poll code then
please try out the current CVS version(s) or wait for tomorrow's
snapshots.
I'll commit
Squid doesn't currently implement any smarts for the WCCPv2 return path.
Adrian
2009/5/6 kgardenia42 kgardeni...@googlemail.com:
On Fri, May 1, 2009 at 5:28 AM, Amos Jeffries squ...@treenet.co.nz wrote:
kgardenia42 wrote:
On 4/30/09, Ritter, Nicholas nicholas.rit...@americantv.com wrote:
It's a per-cache_dir option in Squid-2.7 and above; I'm not sure about 3.
Adrian
2009/5/20 Jason Spegal jspe...@comcast.net:
Just tested and verified this. At least in Squid 3.0 minimum_object_size
affects both memory and disk caches. Anyone know if this is true in 3.1 as
well? Any thoughts
Squid-2.HEAD has some internal rewriting support.
I'm breaking it out into a separate module in Lusca (rather than being
an optional part of the external rewriter) to make using it in
conjunction with the external URL rewriter possible.
Adrian
2009/6/26 Jeff Pang pa...@laposte.net:
Does
2009/6/26 Phibee Network Operation Center n...@phibee.net:
ok, so the bugs are not resolved, no?
The bugs get resolved when someone contributes a fix.. :)
Adrian
2009/6/27 Chris Robertson crobert...@gci.net:
I'm running a strictly forward proxy setup, which puts an entirely different
load on the system. It's also a pretty low load (peaks of 160 req/sec at
25mbit/sec).
Just another random datapoint - I've just deployed my Squid-2
derivative (which is
Good writeup!
I'm rapidly coming to the conclusion that the problem with
transparency setups is not just a lack of documentation and examples,
but a lack of clear explanation and understanding of what is actually
going on.
I had one user try to manually configure GRE interfaces on the Cisco
side
2009/7/20 Mark Lodge mlodg...@gmail.com:
I've come across this at
http://wiki.squid-cache.org/Features/StoreUrlRewrite
Feature: Store URL Rewriting?
Does this mean i can cache videos without using videocache?
That was the intention. Unfortunately, people didn't really pick up on
the power
2009/6/30 Ronan Lucio lis...@tiper.com.br:
Could you tell what hardware do you use?
Reading Squid-Guide
(http://www.deckle.co.za/squid-users-guide/Installing_Squid) it says Squid
isn't CPU intensive, says a multiprocessor machines would not increase speed
dramatically.
It's a dual quad core
Upgrade to a later Squid version!
adrian
2009/6/30 goody goody think...@yahoo.com:
Hi there,
I am running squid 2.5 on freebsd 7, and my squid box responds very slowly
during peak hours. my squid machine has twin dual-core processors, 4 GB RAM and
the following hdds.
Filesystem Size
This won't work. You're only redirecting half of the traffic flow with
the wccp web-cache service group. The tproxy code is probably
correctly trying to originate packets -from- the client IP address to
the upstream server but because you're only redirecting half of the
packets (ie, packets from
I had specified how to implement proper quota support for a client -
but the project unfortunately fell through.
It's easy to hook into the end of an HTTP request and mark how much
bandwidth was used. The missing piece is a method of permitting
network access for users so they can't easily access
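The accounting half really is straightforward. A sketch (my own, not the
quoted project) that tallies bytes per client from Squid's native
access.log, where the client address is the 3rd field and the reply size
the 5th:

```python
from collections import defaultdict

def bytes_per_client(lines):
    """Sum reply bytes per client address from native access.log lines."""
    usage = defaultdict(int)
    for line in lines:
        fields = line.split()
        if len(fields) >= 5:
            try:
                usage[fields[2]] += int(fields[4])   # client -> bytes
            except ValueError:
                pass          # skip lines with a malformed size field
    return dict(usage)
```

The hard part, as noted above, is the enforcement side: actually cutting
off network access once a user exceeds their quota.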
2009/7/14 Amos Jeffries squ...@treenet.co.nz:
Aha! duplicate syn-ack is exactly the case I got a good trace of earlier.
Turned out to be missing config on the cisco box.
Do you have an example of this particular (mis) configuration? The
note in the Wiki article isn't very clear.
The
2009/7/14 Amos Jeffries squ...@treenet.co.nz:
Do you have an example of this particular (mis) configuration? The
note in the Wiki article isn't very clear.
I don't. The admin only mentioned that adding a bypass on the service group
fixed the issue.
I had a tcpdump of a set of requests
2009/7/14 Jarosch, Ralph ralph.jaro...@justiz.niedersachsen.de:
This is the latest support squid-2 version for RHEL5.3
An I want to use the dnsserver
Right. Well, besides the other posters' response about the cache peer
setup being a clue - you're choosing a peer based on source IP as far
as I
-cache.org
Subject: RE: [squid-users] https from different Subnet not working
-Original Message-
From: adrian.ch...@gmail.com [mailto:adrian.ch...@gmail.com] On
behalf
of Adrian Chadd
Sent: Tuesday, 14 July 2009 11:16
To: Jarosch, Ralph
Cc: squid-users@squid
2009/7/16 Jamie Tufnell die...@googlemail.com:
We are talking files up-to-1GB in size here. Taking that into
consideration, would you still recommend this architecture?
On disk? Sure. The disk buffer cache helps quite a bit.
In memory? (as in, the squid hot object cache; not the buffer
I was going to say; I'm tweaking the performance of a cache with 21
million objects in it now. That's a bit bigger than 2^24.
2009/7/16 Henrik Nordstrom hen...@henriknordstrom.net:
tor 2009-07-16 klockan 14:29 +1200 skrev Amos Jeffries:
For you with MB-GB files in Squid-2 that changes to faster
2009/7/21 Soporte Técnico @lemNet sopo...@nodoalem.com.ar:
rep_mime_type can't be used for parent selection because it is evaluated
before any content has arrived?
Correct.
Adrian
Just break on SIGABRT and SIGSEGV. The actual place in the code where
things failed will be slightly further up the callstack than the break
point but it -will- be triggered.
Just remember to ignore SIGPIPEs or you'll have a strangely failing Squid. :)
adrian
2009/7/21 Marcus Kool
2009/7/25 Amos Jeffries squ...@treenet.co.nz:
?? looks like your problem. Most of the web traffic you will ever see is
under 2 MB in size.
Average size is somewhere between 32KB and 128KB depending on your clients.
Weird; my largest proxy customer with around 15,000 users or so now
behind one
2009/7/26 Jason Spegal jspe...@comcast.net:
I was able to cache Pandora by compiling with --enable-http-violations and
using a refresh_pattern to cache everything regardless. This however broke
everything by preventing proper refreshing of any site. If it could be
worked where violations only
This doesn't surprise me. They may be trying to maximise outbound
bits, trying to retain control over content, not understanding
caching, or some combination of the above.
I'd suggest contacting them and asking.
adrian
2009/7/26 Jason Spegal jspe...@comcast.net:
A little bit messy but
Change ufs to aufs - assuming you compiled in aufs.
Consider upgrading to Squid-2.7.STABLEx - I did a whole lot of little
performance tweaks between 2.6 and 2.7.
Learn about oprofile and submit some performance information to help
developers. :)
Adrian
2009/7/28 jotacekm
The donations were always few and far between. I'm not sure if there's
been any real active donations in the last twelve months; I think only
Duane knows.
Adrian
2009/8/2 Juan C. Crespo R. jcre...@ifxnw.com.ve:
Guys
Checking the site I found there is no donation from December 2008, or its
2009/8/2 Sachin Malave sachinmal...@gmail.com:
I have multicore processor here, I want to run squid3 on this
platform, Does squid support multithreading ? will it improve the
performance ?
None of the public Squid codebases currently support general
multithreading. There's some threading for
2009/8/2 smaugadi a...@binat.net.il:
Dear Adrian,
During the implementation we encountered issues with all kind of variables
such as:
Limit of file descriptors (now the squid is using 204800).
TCP port range was low (increased to 1024 65535) TCP timers (changed them)
The ip_conntrack and
Timing buffered disk reads: 304 MB in 3.01 seconds = 100.93 MB/sec
hdparm -T /dev/sdb1
/dev/sdb1:
Timing cached reads: 4192 MB in 2.00 seconds = 2096.58 MB/sec
Any ideas?
Regards.
Adrian Chadd wrote:
2009/8/2 smaugadi a...@binat.net.il:
Dear Adrian,
During the implementation we encountered issues
Well, from what I've read, SSDs don't necessarily provide very high
random write throughput over time. You should do some further research
into how they operate to understand what the issues may be.
In any case, the much more important information is what IO pattern(s)
are occurring on your
a...@binat.net.il:
Well, I'm seeing that the CPU is spending a lot of time waiting for outstanding
disk I/O requests.
Adi
Adrian Chadd wrote:
Are you seeing high IO wait CPU use, or high IO wait times on IO?
Adrian
2009/8/2 smaugadi a...@binat.net.il:
Dear Adrian,
Well my conclusion
2009/8/2 Heinz Diehl h...@fancy-poultry.org:
1. Change cache_dir in squid from ufs to aufs.
That is almost always a good idea for any decent performance under any
sort of concurrent load. I'd like proof otherwise - if one finds it,
it indicates something which should be fixed.
2. Format
Investigate tproxy
Adrian
2009/8/4 Ja-Ryeong Koo wjb...@gmail.com:
Hello,
I am writing this email to ask something regarding ways to hide Caching
Server IP address.
I have one apache server, one caching server (squid2.6.stable22).
(Client -- Caching Server (Reverse Proxy)
Is this still involving the videocache stuff?
If it is, why aren't you asking them?
Adrian
2009/8/4 ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ mirz...@gmail.com:
(repost)
and how about caching online game patchers? e.g. ragnarok online, rohan
online, etc.?
do they use the same method?
and can anyone give me
[truncated mpstat output; one CPU shows 100.00 in the %iowait column]
Well, sometimes you eat the bear and sometimes the bear eats you.
Do you have any more ideas?
Regards,
Adi.
Adrian Chadd wrote:
2009/8/2 Heinz Diehl h...@fancy
2009/8/4 Hery Setiawan yellowha...@gmail.com:
maybe in his mind (and my mind too, actually), with a big mem_cache the
files will be transferred faster. But that big is too much for me,
since I only have 4GB of RAM and a thousand workstations
connecting to my squid.
The squid memory
Have you asked the videocache group why it functions the way it functions?
adrian
2009/8/6 pavel kolodin pavelkolo...@gmail.com:
On Thu, 06 Aug 2009 05:34:09 -, Amos Jeffries squ...@treenet.co.nz
wrote:
Why?
Possible reasons:
1) 302 being the status you really want to use for
don't do that.
As someone who did this for 10+ years, I suggest you do this:
* do some hackery to find out how your freeradius server stores the
currently logged in users. It may be in a mysql database, it may be
in a disk file, etc, etc
* have your redirector query -that- directly, rather than
The pipelining used by speedtest.net and such won't really get a
benefit from the current squid pipelining support.
Adrian
2009/8/15 Daniel sq...@zoomemail.com:
Henrik,
I added 'pipeline_prefetch on' to my squid.conf and it still isn't
working right. I've pasted my entire
Squid doesn't share memory or disk cache at the moment. It won't
share/slice filedescriptors the way you want it to.
I could probably write a unified logging hack so multiple squid
processes log to the same file via a single helper that handles
multiple pipes or something, one from each Squid.
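Such a helper could look roughly like this (purely hypothetical, not an
existing Squid feature): it multiplexes over several inputs, one per Squid
instance, and serializes everything into one file. I use select() here so
the sketch also works on plain files for testing; real use would point it
at named pipes each Squid logs into.

```python
import os
import selectors

def merge_logs(input_paths, output_path):
    """Read from several pipes/files and append everything to one log."""
    sel = selectors.SelectSelector()   # select() handles pipes and files
    for path in input_paths:
        fd = os.open(path, os.O_RDONLY | os.O_NONBLOCK)
        sel.register(fd, selectors.EVENT_READ, path)
    remaining = len(input_paths)
    with open(output_path, 'ab') as out:
        while remaining:
            for key, _ in sel.select():
                chunk = os.read(key.fd, 65536)
                if chunk:
                    out.write(chunk)
                else:                  # EOF: the writer went away
                    sel.unregister(key.fd)
                    os.close(key.fd)
                    remaining -= 1
    sel.close()
```

With pipes, the helper would simply keep running until every Squid closes
its end; log lines from all instances interleave into the single file.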
Talk to the freebsd guys (eg me) about pmcstat and support for your
hardware. You may just need to find / organise a backport of the
particular hardware support for your platform. I've been working on
profiling Lusca with pmcstat and some new-ish tools which use and
extend it in useful ways.
Please create an Issue and attach the patch. I'll see about including it!
adrian
2010/1/6 Rajesh Nair rajesh.nair...@gmail.com:
Thanks for the response, Matt!
Unfortunately the cooperating HTTP service solution would not work
as I need to set the cookie for the same domain for which the
You need to set ulimit -n newlimit before you compile and
run Squid. Maybe you don't need to do it before compilation
these days, I forget.
ulimit -n 32768
check ulimit -a
Then ./configure
make
make install
put ulimit -n 32768 in the startup script
start squid
check cachemgr info page and/or the
On Thu, Jan 24, 2008, Dave Raven wrote:
Hi all,
Is it possible to make the request back out the router that sent in
a WCCP packet to begin with? For example if you have two routers, and router
A sends request A and router B sends request B to send them back through
their origin routers,
On Thu, Jan 24, 2008, Ryan Thoryk wrote:
I've got more information (on the FreeBSD side):
The packets are coming in over the GRE interface, but seem to be
randomly disappearing after the IPFW forward operation (forwards to
localhost:3128).
Here's the ipfw config:
00150 fwd
On Thu, Jan 24, 2008, Jason Taylor wrote:
I worked around that a few years ago by having multiple instances of
squid on my server, each with its own IP and dedicated squid.conf
Each router would connect to its own squid instance and linux policy
routing would determine the default gateway to
You're using UFS and small filedescriptor counts.
Recompile with 16384 filedescriptors and enable AUFS.
Adrian
On Fri, Jan 25, 2008, bijayant kumar wrote:
Hi Arana,
Thanks for your reply. As you suggested in your
reply, increasing the filedescriptor count can be
dangerous. Is there any
Check ./squid -v ; see what it was compiled with.
Make sure you start squid after you change the default ulimit.
Either put ulimit -n 8192 at the top of the squid startup
script or find the place where your default ulimits are set
and modify that.
Adrian
On Fri, Jan 25, 2008, bijayant kumar
available
Please help me
On Fri, Jan 25, 2008, bijayant kumar wrote:
Hi,
[EMAIL PROTECTED] ~ $ ulimit -n
1024
That's your problem.
Check your pam security file. Under ubuntu it's:
[EMAIL PROTECTED]:~$ cat /etc/security/limits.conf | grep nofile
#- nofile - max number of open files
* hard
at this point to find out
what you can do to make sure the ulimit is properly extended.
Adrian
of squid was wrong, which
comes in gentoo. It was setting the filedescriptor
limit to 1024. I started it manually as suggested by
Adrian Chadd, and it started showing 8192 file
descriptors available. Once again thanx a ton.
Now I want a suggestion from you that is 8192 file
descriptor
On Fri, Jan 25, 2008, Rafael Donggon wrote:
Below is the config.log error message upon compiling..
I am using Ubuntu 7.10 Desktop Ed.
Make sure you install the meta-package build-essential, and
not individual packages.
It sounds like you're missing part of the development libraries.
Adrian
On Tue, Jan 22, 2008, Tory M Blue wrote:
Ideas? Yes 2.7, but trying to get an idea how stable folks think it
is, as it's been noted it has more http/1.1 functionality.
2.7 seems stable enough for the handful of people who have evaluated it.
The only real way to know if it's stable for you is,
I've only come across this in my own testing.
I'd suggest you find a support contact at google and provide your
details (at least the public IPs of your proxy servers) and see
what Google have to say.
There's no workaround that I can think of with Squid that doesn't involve
mapping users to
On Mon, Jan 28, 2008, Chris Woodfield wrote:
This does bring an interesting question - is it possible to give squid
*too much* memory?
My theoretical setup would be an uber-box (32GB RAM, multi-TB of disk)
running 64-bit squid and with mem_cache set to something in the
25-30GB range
Squid will probably crash.
RAID1 is an acceptable compromise and may improve IO throughput
slightly.
I've got a goal to get some alternate storage code going in the next
6 to 12 months which will make a future codebase handle this sort
of situation better.
Adrian
On Mon, Jan 28, 2008, Chris
* Which version of Squid?
* What's your http_port look like?
On Mon, Jan 28, 2008, Sherwood Botsford wrote:
I've put my foot in my mouth up to about the knee.
Somehow in an edit squid.conf now does something very odd:
If I look up http://wiki.squid-cache.org/FrontPage
I get an error:
On Tue, Jan 29, 2008, Amos Jeffries wrote:
I had Adrian benchmark 3.x recently. With his specific RAM-pathways test.
The cutoff for speed seems to be Squid3 reaching 500-650 req/sec and
Squid 2.6 going past that into the 800-900 req/sec ranges. At a few
hundred concurrent requests.
Would you file a bugzilla bug with the details?
Thanks!
Adrian
On Tue, Jan 29, 2008, Dave Overton wrote:
Using transparent proxy stuff, in freebsd/wccp2 setup.
http://gallery.live.com just sits there
http://www.evga.com just sits there.
Other live.com sites work, that one
The best place to start is to throw this into a bugzilla ticket
so it doesn't get lost.
On Tue, Jan 29, 2008, faidzul eazam wrote:
hi all
i'm using squid 2.6.18 on freebsd 6.2. Before this, squid ran smoothly,
but starting yesterday there are certain websites that i can't access,
and they keep giving
Hm, weird! Can you please throw this in a bugzilla ticket?
Thanks,
Adrian
On Tue, Jan 29, 2008, Víctor J. Hernández Gómez wrote:
Hi all,
we have a squid2.5STABLE14 on Linux up and working wonders for a long
time, and today we have found our first problem.
Looking at the log
2.5 isn't really supported by anyone anymore; I'd suggest upgrading to
the latest 2.X (which is 2.6.STABLE18 atm) or 3.X (3.0.STABLE1) releases
and see if it works.
Adrian
On Tue, Jan 29, 2008, Gavin Hamill wrote:
Hi,
We have a 2.5.12 installation on Ubuntu dapper which we're having
Talk to your OS vendor support and see if they've got tools to limit
access to the process list to the processes running under your uid.
Then users can only see processes running under their uid, and won't
see Squid.
Adrian
On Wed, Jan 30, 2008, Richard wrote:
Hello!
I am running the
On Thu, Jan 31, 2008, Amos Jeffries wrote:
If the delay between starting and stopping is long enough and can be
done with X + random-time offset. Squid will cope with some helpers
simply stopping and resumes them.
Yeah, the trouble is that you can't kill them all or Squid will die
On Thu, Jan 31, 2008, Tek Bahadur Limbu wrote:
Do you think that this ZFS file system scales better than current file
systems if used for caching such as Squid?
Do you have any statistics?
I've got no statistics and I've not seen any reports from anyone who
has tried comparing ZFS to
On Thu, Jan 31, 2008, Chris Woodfield wrote:
Interesting. What sort of size threshold do you see where performance
begins to drop off? Is it just a matter of larger objects reducing
hitrate (due to few objects being cacheable in memory) or a bottleneck
in squid itself that causes issues?
On Thu, Jan 31, 2008, Squid Dev wrote:
Hi guys,
I've seen some posts already (dated a while back) that there is no
support as of yet for WCCP on SquidNT, due to the lack of
implementation/integration of GRE on Windows.
Is this still the case? If so, is there any sort of development
On Fri, Feb 01, 2008, Amos Jeffries wrote:
Hm, I've always wanted to fix that pause during reconfigure and
rotate. Of course, reconfigure's refusal to accept connections is
probably it closing and re-opening all its listen() sockets..
I've been thinking the same.
Yeah, and squid -k
On Thu, Jan 31, 2008, Chris Woodfield wrote:
I just put a squid system with url_rewriter children into production.
Alongside this we have a script that regularly runs squid -k rotate,
then FTPs the log.1 files to a remote site for backup/processing.
The issue I've noticed is that every
You need to see where the url helper (squidguard?) is actually logging
to, and peruse its logs.
Adrian
On Fri, Feb 01, 2008, Goj, Dirk wrote:
Hi.
Ah ok. That's done automatically every night by a scheduled cron job. I also ran
it manually, but it brought no results. Means the problem
On Sat, Feb 02, 2008, J. Peng wrote:
Hello,
How to config cache clusters in squid 2.6? ie, the parent/sisters
caches for web reverse proxy.
Is there any document or howto? thanks.
Have you checked out the Squid FAQ and other documentation in the wiki?
http://wiki.squid-cache.org/SquidFaq/