Nevermind this... Don't ask :(
On Sat, Oct 8, 2011 at 8:45 PM, david robertson wrote:
Hello, I'm having a bit of an issue with CARP, specifically balancing the load.
I have 3 frontend servers that cache only to memory, and 2 backend
servers that cache only to disk (one aufs device, and one coss device
on each). The two backend servers are running on identical hardware,
and running
'--enable-follow-x-forwarded-for'
'--enable-storeio=null,aufs' '--enable-removal-policies=heap,lru'
'--with-maxfd=16384' '--enable-poll' '--disable-ident-lookups'
'--enable-truncate' '--with-pthreads' 'CFLAGS=-DNUMS=
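For reference, a minimal frontend squid.conf sketch for CARP balancing across two backend parents looks like this (the hostnames and port are placeholders, not taken from the thread):

```
# Hash each request URL to one of the two backend parents using CARP
cache_peer backend1.example.com parent 3128 0 carp
cache_peer backend2.example.com parent 3128 0 carp
# Always go through the parents, never direct to the origin
never_direct allow all
```

Uneven balancing usually comes down to the hash distribution across the peers, so the peer list and any weight options are the first things to check.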
Hello, I have a bit of an urgent issue - Squid is serving 400 errors,
and I'd like to avoid that. Ideally, we want Squid to serve the
object that it has in cache, instead of the 400. I have
stale-if-error=1800 in the headers, but squid is still serving a 400
whenever it gets it from the origin (w
l - I have no idea why I didn't
think of that before...
You're a genius, man. A genius.
On Wed, Nov 10, 2010 at 5:07 AM, Amos Jeffries wrote:
>
> Harping way back...
>
>> On Tue, Nov 9, 2010 at 9:27 PM, Amos Jeffries wrote:
>>> On Tue, 9 Nov 2010 20:59:56 -0500, david robertson wrote:
>>>> I'm in the process of writing a script to give me some cache hit
>>>> statistics for my cluster. There's some confusion on the cache_object
>>>> i
Sorry:
Squid Cache: Version 2.7.STABLE9-20101104
The frontend servers only cache to memory, via
cache_dir null /dev/null
On Tue, Nov 9, 2010 at 9:27 PM, Amos Jeffries wrote:
> On Tue, 9 Nov 2010 20:59:56 -0500, david robertson
> wrote:
>> I'm in the process of writing a script
I'm in the process of writing a script to give me some cache hit
statistics for my cluster. There's some confusion on the cache_object
info output, though. For example, this particular host only caches to
memory, however this is the output I get:
Request Hit Ratios: 5min: 40.0%, 60mi
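For a script like that, one way to pull the 5-minute hit ratio out of the cache_object/mgr:info text is a small sed one-liner; the sample line below just mirrors the output quoted above (the 60min value is made up for illustration):

```shell
# Sample "Request Hit Ratios" line as it appears in mgr:info output
line='Request Hit Ratios:  5min: 40.0%, 60min: 41.2%'
# Extract the 5-minute percentage
pct=$(printf '%s\n' "$line" | sed -n 's/.*5min: \([0-9.]*\)%.*/\1/p')
echo "$pct"    # prints 40.0
```

In practice the line would come from something like `squidclient mgr:info` against the running instance rather than a hard-coded string.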
This is what you're looking for:
# TAG: negative_ttl time-units
#       Time-to-Live (TTL) for failed requests. Certain types of
#       failures (such as "connection refused" and "404 Not Found") are
#       negatively-cached for a configurable amount of time. The
#       default is 5 minutes.
> What is your digest rebuild time set to?
> your cache_dir and cache_mem sizes?
> and your negative_ttl setting?
digest_rebuild_period 60 minutes
negative_ttl 1 minute
backends use a cache_dir of 20gb (8mb cache_mem)
frontends use a cache_mem of 2gb (no cache_dir)
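Put together, those settings correspond to squid.conf lines along these lines (the cache_dir path and exact sizes are illustrative, not quoted from the thread):

```
# Backend: 20 GB aufs disk cache, tiny memory cache
cache_dir aufs /var/spool/squid 20480 16 256
cache_mem 8 MB
digest_rebuild_period 60 minutes
negative_ttl 1 minute

# Frontend: memory-only (the null store requires --enable-storeio=null)
cache_dir null /dev/null
cache_mem 2048 MB
```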
> What do you get back when
Anyone have any ideas?
On Wednesday, November 3, 2010, david robertson wrote:
Hello, I'm having a cache-digest related issue that I'm hoping someone
here can help me with.
I've got a few frontend servers, which talk to a handful of backend
servers. Everything is working swimmingly, with the exception of
cache digests.
The digests used to work without issue, but suddenly a
-fomit-frame-pointer -funroll-loops -ffast-math
-fno-exceptions'
Linux server.domain.com 2.6.18-8.1.10.el5 #1 SMP Thu Aug 30 20:43:28
EDT 2007 x86_64 x86_64 x86_64 GNU/Linux
On Thu, Oct 7, 2010 at 10:52 AM, david robertson wrote:
Hello, I know this isn't specifically a squid thing, but I think it
might be semi-related.
I've currently got a Dell 6850 (16gb ram, 16 logical processors)
server set up, based on the 'one frontend, two backends' example on
squid-cache.org. Everything will be fine, but once the cache starts
getti
Thanks Leonardo, I have everything working as required :)
On Fri, Aug 13, 2010 at 11:32 AM, Leonardo Rodrigues
wrote:
>
> i believe you can do it and the topics/wiki articles about youtube
> caching should give you interesting points about that.
>
> On 13/08/2010 12:17,
Hello, I have a question concerning the caching of specific URLs:
I'm currently using squid in an accelerator config, and everything is
working perfectly fine. However I've just been given a request to
ignore part of a URL when it comes to caching. For example:
http://domain.com/v/subdir/subdir/
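In squid 2.7 the usual way to ignore part of a URL for caching purposes is the store-URL rewriter, which normalizes the cache key without changing the request itself. A hedged sketch (the helper path and the `/v/` pattern are assumptions for illustration):

```
# squid.conf: rewrite the cache key for matching URLs
storeurl_rewrite_program /usr/local/bin/storeurl.pl
storeurl_rewrite_children 5
acl store_rewrite_list urlpath_regex ^/v/
storeurl_access allow store_rewrite_list
storeurl_access deny all
```

The helper reads URLs on stdin and writes back the canonical form to use as the cache key, so requests that differ only in the ignored path segment map to one cached object.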
t has in fact been updated), or will
all new requests immediately be served the new, updated object?
2010/8/2 Henrik Nordström :
> Sun 2010-08-01 at 11:52 -0400, david robertson wrote:
On Sun, Aug 1, 2010 at 1:12 AM, Amos Jeffries wrote:
> If stampeding is a worry the stale-if-error and stale-while-revalidate
> Cache-Control: options would also be useful (sent from the origin web
> server). These are supported by 2.7.
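As an illustration, an origin response header using these options might look like this (the max-age and revalidate values are examples, not from the thread):

```
Cache-Control: max-age=600, stale-while-revalidate=30, stale-if-error=1800
```

With that, Squid 2.7 may serve the stale object for up to 30 seconds while it revalidates in the background, and for up to 1800 seconds when the origin is returning errors.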
Question - why aren't these options documented anywhere? A
Squid 2.x supports this:
# TAG: collapsed_forwarding on|off
#       This option enables multiple requests for the same URI to be
#       processed as one request. Normally disabled to avoid increased
#       latency on dynamic content, but there can be benefit from enabling
#       this in accelerator setups.
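A minimal squid.conf sketch of that directive (squid 2.7 syntax):

```
# Merge concurrent cache misses for one URI into a single origin request
collapsed_forwarding on
```

This is the usual guard against request stampedes in an accelerator setup, complementing the Cache-Control options discussed above.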