I've been testing quite a few combinations, so for this I had hyperthreading
disabled in the BIOS, and SMP enabled on BSD - so 2 processors. If I enable
hyperthreading it's 4.
I understand that Squid would favour only one processor, yet with SMP on it
lasts 3x longer. My guess would be that it's be
On Wed, 2007-11-14 at 19:09 +0530, Manu Garg wrote:
> Here is my problem:
>
> I have a cache server at location X: cache.X. This server peers up
> with cache servers at location Y and Z:
> cache1.Y,
> cache2.Y,
> cache1.Z,
> cache2.Z.
>
> I want cache.X to talk to cache[12].Y in round robin manner
On Wed, 2007-11-14 at 06:14 +, Ed Singleton wrote:
> However, when I try to access the address I get this error:
>
> "Access Denied. Access control configuration prevents your request
> from being allowed at this time."
So you reached Squid, but Squid didn't know where or how to forward the
On Tue, 2007-11-13 at 20:39 -0700, murrah boswell wrote:
> I have not fired up my scripts to trigger wget yet. I have been testing
> by grabbing a few Web pages using a browser and logged into Squid
> environment as user 'wget.' I am baby stepping my way through this, so I
> want to get the Squid
On Wed, 2007-11-14 at 14:29 +0200, Dave Raven wrote:
> Will do - I'll setup polymix-4 tomorrow and try starting on a full
> cache. Something interesting though - my processor usage never really gets
> over 50% or so (SMP or single processor) until it crashes; but with SMP
> 800RPS lasts 200+ minutes
On Wed, 2007-11-14 at 11:49 +1100, Mark Nottingham wrote:
> I'd like to double-check the semantics of read_ahead_gap.
>
> AIUI, Squid will buffer up to that much data on both requests and
> responses, in addition to the TCP send and receive buffers.
responses only.
> So, if I have (for the sake
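For anyone tuning this, read_ahead_gap takes a size value in squid.conf; the 64 KB here is purely illustrative, not a recommendation:

```
# Buffer at most 64 KB of response data ahead of what the client
# has consumed (in addition to the kernel's TCP buffers).
read_ahead_gap 64 KB
```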
On Wed, Nov 14, 2007, Jason Gauthier wrote:
> > Good-o. Care to share your WCCP + ASA setup so I can put it into the
> > Squid Wiki?
>
> Adrian, I was able to pull off the working config from the wiki :) Job
> well done!
Cool!
> Turns out I can do this. But I have to choose between authentication
> > I asked some generic questions earlier in the week and got some great
> > documentation. This has led me to a working WCCP/Squid implementation.
> > I thank you.
>
> Good-o. Care to share your WCCP + ASA setup so I can put it into the
> Squid Wiki?
Adrian, I was able to pull off the working config from the wiki :) Job
well done!
I would like to filter URLs, but as all of you know there are lots of
sites that provide text boxes to access those banned websites, e.g.
http://www.anonymouse.org and others. Is it possible to block only my
required websites? Anonymous proxy websites are updated daily.
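The usual shape of this block is a dstdomain ACL fed from a file, sketched below; the file path is an assumption, and as the poster notes, it only catches anonymizers already on the list, so the list needs regular updating:

```
# /etc/squid/anonymizers.txt holds one domain per line, e.g.:
#   .anonymouse.org
acl anonymizers dstdomain "/etc/squid/anonymizers.txt"
http_access deny anonymizers
```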
On Tue, Nov 13, 2007, Jason Gauthier wrote:
> All,
>
> I asked some generic questions earlier in the week and got some great
> documentation. This has led me to a working WCCP/Squid implementation.
> I thank you.
Good-o. Care to share your WCCP + ASA setup so I can put it into the
Squid Wiki?
On Wed, Nov 14, 2007, Scott Anctil wrote:
> First my access.log file grows about 200 MB/Hr. This means I reach the
> max file size of 2GB in about 10 hours. I know that I can rotate the
> logs within the 10 hours to solve this but is there a better solution?
./configure --with-large-files
> The
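Alongside the large-files build flag, rotation itself can be automated; a sketch (the retention count and cron schedule are assumptions, and the squid binary path varies by install):

```
# squid.conf: keep 24 old generations of each log
logfile_rotate 24

# crontab entry to rotate hourly:
# 0 * * * * /usr/local/squid/sbin/squid -k rotate
```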
Chris, Adrian and Amos, Thanks for your help CPU is now running 1% - 40%
average supporting 22,000 users.
Things seem to be running well for the most part. I have a two
additional concerns.
First my access.log file grows about 200 MB/Hr. This means I reach the
max file size of 2GB in about 10 hours
On Wed, Nov 14, 2007, Tek Bahadur Limbu wrote:
> Also your FreeBSD version 4.x might have also made the difference!
It's entirely possible - you have to remember that FreeBSD-4.x only
allows one process to be in "kernel space" at one time; it's entirely
possible that avoids various race conditions
On Wed, Nov 14, 2007, André Jee wrote:
> Dear all,
>
> I'm having timeout issues when using a website's "search function". I'm
> sending a query and I'm expecting an answer in return. Queries that take
> less than 60 seconds seem to be fine, but I don't get an answer if the
> query takes more
Dear list,
We have a Squid proxy server with ACL filters: unauthenticated users
can only surf a restricted list of sites.
Users who want to surf all sites need to know the login+password.
The problem now is that for many sites that load content from
other sites (e.g. Yahoo) users need to
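A minimal sketch of the usual shape for this policy (the file path and Basic auth are assumptions; note the whitelist file must also contain the third-party domains those pages embed, which is the crux of the problem described):

```
acl allowed_sites dstdomain "/etc/squid/allowed_sites.txt"
acl authed proxy_auth REQUIRED

# Unauthenticated users may reach the restricted list only;
# everything else requires a login.
http_access allow allowed_sites
http_access allow authed
http_access deny all
```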
Amos Jeffries wrote:
>I have a feeling we saw you here a while earlier, yes?
Yup.
>Does the OWA server really respond to "office-pc39:11994/exchange" normally?
No, the office-pc39 is supposed to listen on port 11994 and redirect everything
to the OWA server (internal IP: 300.200.80.254) as a transparent
Hey Emiliano,
I have not had much to do with rapidshare, but if you can pin down the
servers and requests from a given source, you should be able to set up your
Squid to have a rapidshare server as one of the cache_peers, pre-configured
with login details and an ACL to forward all rapidshare requests
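Roughly, the suggestion above might look like this in squid.conf - the peer hostname, port, and credentials are all hypothetical placeholders:

```
# Match rapidshare traffic.
acl rapidshare dstdomain .rapidshare.com

# Hypothetical upstream that holds the account login.
cache_peer rs-proxy.example.com parent 8080 0 no-query login=user:pass
cache_peer_access rs-proxy.example.com allow rapidshare
cache_peer_access rs-proxy.example.com deny all

# Force rapidshare requests through the peer, never direct.
never_direct allow rapidshare
```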
On Nov 14, 2007 7:09 PM, Manu Garg <[EMAIL PROTECTED]> wrote:
> Here is my problem:
>
> I have a cache server at location X: cache.X. This server peers up
> with cache servers at location Y and Z:
> cache1.Y,
> cache2.Y,
> cache1.Z,
> cache2.Z.
>
> I want cache.X to talk to cache[12].Y in round robin
Dear all,
I'm having timeout issues when using a website's "search function". I'm
sending a query and I'm expecting an answer in return. Queries that take
less than 60 seconds seem to be fine, but I don't get an answer if the
query takes more than 60 seconds to perform. IE/FF does not show an
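If Squid itself turns out to be the culprit, these are the timeout knobs to check; the values shown are roughly the defaults, all well above 60 seconds, so a 60-second cutoff often points at a firewall or load balancer in the path instead:

```
# How long Squid waits on an idle server read.
read_timeout 15 minutes
# Overall limit on forwarding a single request.
forward_timeout 4 minutes
# TCP connect attempts to the origin.
connect_timeout 1 minute
```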
Here is my problem:
I have a cache server at location X: cache.X. This server peers up
with cache servers at location Y and Z:
cache1.Y,
cache2.Y,
cache1.Z,
cache2.Z.
I want cache.X to talk to cache[12].Y in round robin manner as long as
they are accessible. Peering should failover to cache[12].Z
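One commonly suggested sketch for this in squid.conf, using round-robin with a weight bias (ports and weights are assumptions; note weight biases selection rather than giving strict group failover, though dead peers are skipped automatically, so the Z peers effectively take over when both Y peers are down):

```
# Prefer the Y peers: round-robin, heavily weighted.
cache_peer cache1.Y parent 3128 3130 round-robin weight=10
cache_peer cache2.Y parent 3128 3130 round-robin weight=10

# Z peers join the same round-robin with a tiny weight, so they
# see little traffic unless the Y peers are marked dead.
cache_peer cache1.Z parent 3128 3130 round-robin weight=1
cache_peer cache2.Z parent 3128 3130 round-robin weight=1
```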
Hi Adrian,
Will do - I'll setup polymix-4 tomorrow and try starting on a full
cache. Something interesting though - my processor usage never really gets
over 50% or so (SMP or single processor) until it crashes; but with SMP
800RPS lasts 200+ minutes, and without it only 80 minutes...
Thanks
Hi Tek,
I've had to make several modifications to the standard setup to get it
to handle the actual requests coming in, the cache (without disks) is able to
maintain around 1800RPS now - of course I don't expect the disks to ever get
that high.
I'm running 4.11, the relevant kernel tweaks
Yes, although your setup behaves better under high load for longer. I
stopped using diskd myself because of bug #761, although I must admit
that I had not experienced any issues on my servers when I was using
it.
Maybe one of the developers on the list can clarify. Is it the case
that diskd crash
Hi Dave,
Dave Raven wrote:
I have seen the error messages before, but not during these tests. diskd
definitely seems to delay the time-till-crash by a lot - as I understand it the
problems in diskd are crashes under high load, not that it slows it down right?
From my experience, YES, DISKD c
What you may need to do is run the tests at lower req/sec to find out where
it's "stable"; or actually run Polymix-4 up properly.
Disk caches - UFS to a large extent, COSS somewhat - take a while to reach
a 'steady state'. With UFS (which I think you're using here, right?) you end
up initially lay
On Nov 14, 2007 10:42 AM, Amos Jeffries <[EMAIL PROTECTED]> wrote:
> Ed Singleton wrote:
> > I'm trying to set up squid as a web accelerator behind apache for a
> > couple of slow dynamic sites I have.
>
> Well, first trouble is that accelerators should be in _front_ of the web
> server. Apache has
I have seen the error messages before, but not during these tests. diskd
definitely seems to delay the time-till-crash by a lot - as I understand it the
problems in diskd are crashes under high load, not that it slows it down right?
Thanks for the help
Dave
>
> dns_nameservers - if set the NS listed _ALL_ resolve the IPA to
> 123.123.123.123
> - if not set the /etc/resolv.conf NS _ALL_ do the same.
Thanks Amos,
You were right: dns_nameservers had a different set of nameservers than
/etc/resolv.conf! I completely overlooked that directive, thi
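For reference, the directive that bit the poster above looks like this; the addresses are placeholders, and when it is set Squid ignores /etc/resolv.conf entirely, so the two should be kept in sync deliberately:

```
# Explicit resolvers for Squid, overriding /etc/resolv.conf:
dns_nameservers 192.0.2.53 192.0.2.54
```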
Ed Singleton wrote:
I'm trying to set up squid as a web accelerator behind apache for a
couple of slow dynamic sites I have.
Well, first trouble is that accelerators should be in _front_ of the web
server. Apache has perfectly fine caching internally for cachable
content. All the benefit from
J Beris wrote:
Hello list,
I'm seeing a very odd thing with one website, something which I can't
explain at all. It only happens with Squid, if I bypass Squid everything
works as normal.
We are trying to access a website: example.com.
This domain name is resolvable both on the Internet and on o
Hello list,
I'm seeing a very odd thing with one website, something which I can't
explain at all. It only happens with Squid, if I bypass Squid everything
works as normal.
We are trying to access a website: example.com.
This domain name is resolvable both on the Internet and on our
nationwide WAN