On Fri, Jan 2, 2009 at 5:13 PM, Dean Weimer dwei...@orscheln.com wrote:
Here's what I have; does anyone have an idea where I went wrong?
I am running Squid 3.0 STABLE9 on FreeBSD 6.2.
acl NOCACHEPDF url_regex -i ^http://hostname.*pdf$
acl NOCACHEXLS url_regex -i ^http://hostname.*xls$
no_cache
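Presumably the truncated line was meant to deny caching for those two ACLs; a sketch of how that reads (from Squid 2.6 on, the cache directive is the preferred replacement for the older no_cache alias, so either spelling should be accepted by 3.0):

cache deny NOCACHEPDF
cache deny NOCACHEXLS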
Tony,
On Mon, Jun 16, 2008 at 7:31 PM, Anthony Tonns [EMAIL PROTECTED] wrote:
Did you ever find a resolution to this issue? I'm running a very similar
config and running into very similar problems - only on more servers
using more memory and the RHEL squid package on CentOS 5 x86_64. Same
Hi squid-users,
We recently experienced a problem on our new Squid setup (2 Squid
servers configured as reverse proxy - mostly the same configuration as
before except we allocated more memory and disk on the new servers -
the old boxes didn't have this problem). After 2 weeks of very good
On Wed, Feb 27, 2008 at 12:22 AM, leongmzlist [EMAIL PROTECTED] wrote:
What is the swap usage? I once had the same problem with Squid
degrading over time. I had to reduce cache_mem from 2GB to
512MB, and reduce the number of objects in the cache since the index
was growing too big.
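For reference, that kind of tuning is done in squid.conf; a rough sketch with illustrative values (the sizes and the cache_dir path depend entirely on the box):

cache_mem 512 MB
maximum_object_size_in_memory 64 KB
cache_dir ufs /var/spool/squid 20000 16 256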
I
On Jan 11, 2008 1:02 AM, Adrian Chadd [EMAIL PROTECTED] wrote:
If someone coded up a much more flexible version which used maps and lists
to turn things like Accept-Encoding: into a canonicalised form then
I'd be extremely happy.
I'm not sure that's the right approach to this problem, and
On Jan 11, 2008 1:02 AM, Adrian Chadd [EMAIL PROTECTED] wrote:
Works for them. :) Something to fix that sort of thing may appear
sometime in the future but it depends on interest and funding.
Btw, the patch posted on Domas' blog doesn't work as it doesn't filter
the Vary headers.
Moreover as I
On Jan 10, 2008 3:02 PM, Adrian Chadd [EMAIL PROTECTED] wrote:
I've brainstormed with the Wikipedia admins and they've got a hack they
use somewhere to work around it, but it's too specific to include in
Squid in that state.
Any patch I could take a look at? I'm not against running a patched
On Jan 10, 2008 4:49 PM, Guillaume Smet [EMAIL PROTECTED] wrote:
I found this patch referenced on Google:
http://p.defau.lt/?C9GXHJ14GWHAYK1Pf0x9cw
I don't know where it comes from, the paste site is the only reference I have.
Just FYI I found the source of this patch:
http://dammit.lt/2007/12
On Jan 11, 2008 12:41 AM, Adrian Chadd [EMAIL PROTECTED] wrote:
Yup, and Domas supplied a patch to dodge the behaviour, but it's too specific
to add into Squid.
Totally agree, I just wanted to document it on the list. Do you see
any good reason for the second part of the patch which comments the
Hi all,
I'm currently debugging a problem and digging into the Squid cache. I
noticed that even with 2.6, "Accept-Encoding: gzip, deflate" as sent by
IE and "Accept-Encoding: gzip,deflate" as sent by Firefox are considered
two different values, so pages are cached twice when the response uses
"Vary: Accept-Encoding".
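To make the difference concrete, the two requests differ only in the whitespace of one header (URL and host are made up); because the response carries Vary: Accept-Encoding, Squid keys the variant on the exact header value, so each request ends up with its own cache entry:

GET /index.html HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip, deflate

GET /index.html HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip,deflate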
Hi again,
While digging my Squid cache, I found the following cache object:
Date: Wed, 09 Jan 2008 01:38:17 GMT
Server: Apache/2.0.52 (CentOS)
X-Powered-By: PHP/4.3.9
Content-Length: 13128
Expires: Thu, 10 Jan 2008 00:31:59 GMT
Cache-Control: public, must-revalidate
Last-Modified: Wed, 09 Jan
On 6/6/07, Santiago Del Castillo [EMAIL PROTECTED] wrote:
If I set always_direct allow all, it works. But the problem is that
this Squid will be used as a sibling :(
That's normal. I don't see any cache_peer in your configuration file.
--
Guillaume
On 6/6/07, Santiago Del Castillo [EMAIL PROTECTED] wrote:
Because I'm not setting it up as a sibling right now. First I want to make it
work as a virtual-host reverse proxy. Once that's working, I'll set it up as
a sibling.
You have to set a parent cache_peer anyway. Squid 2.6 is a bit
different from 2.5 for
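For what it's worth, a minimal Squid 2.6 virtual-host accelerator setup looks roughly like this (the hostname and origin address are illustrative); the cache_peer line pointing at the origin server is the parent mentioned above:

http_port 80 accel defaultsite=www.example.com vhost
cache_peer 192.168.0.10 parent 80 0 no-query originserver name=origin
cache_peer_access origin allow all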
Hi all,
I can't find any information in the documentation about the case
(in)sensitivity of hierarchy_stoplist. Can anyone confirm whether it is
case sensitive or not?
Thanks.
--
Guillaume
On 5/31/07, Henrik Nordstrom [EMAIL PROTECTED] wrote:
On Thu, 2007-05-31 at 17:20 +0200, Guillaume Smet wrote:
I can't find any information in the documentation about the case
(in)sensitivity of hierarchy_stoplist. Can anyone confirm whether it is
case sensitive or not?
It's case sensitive.
Thanks Henrik
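For reference, a typical hierarchy_stoplist line looks like this (the classic default entries); since matching is case sensitive, an entry written as cgi-bin will not match CGI-BIN in a URL:

hierarchy_stoplist cgi-bin ?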
On 3/26/07, James Sutherland [EMAIL PROTECTED] wrote:
LFUDA should be a close approximation to the result the original poster
wanted: anything getting hit only by a robot will still not be
'frequently' used, so although it will be cached initially, it will soon
be evicted again.
Yes, it's a
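For anyone wanting to try this, the replacement policy is chosen in squid.conf; a minimal sketch (if I remember correctly, the heap policies need to be built in with --enable-removal-policies):

# evict objects that are not frequently used, even if a robot touched them once
cache_replacement_policy heap LFUDA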
On 3/26/07, Henrik Nordstrom [EMAIL PROTECTED] wrote:
One way is to set up a separate set of cache_peer for these robots,
using the no-cache cache_peer option to avoid having that traffic
cached. Then use cache_peer_access with suitable acls to route the robot
requests via these peers and deny
On 3/26/07, Henrik Nordstrom [EMAIL PROTECTED] wrote:
1. accept request
2. http_access etc
3. cache lookup, send response if a cache hit.
Yeah, I missed this step.
I can't find a no-cache option for cache_peer in squid 2.6-STABLE4.
proxy-only seems to be the option to use. Can you confirm?
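Putting Henrik's suggestion together with proxy-only, a sketch of what this might look like in 2.6 (the peer name, origin address and robot user-agents are illustrative; as the truncated sentence above suggests, the robot requests would also have to be denied on the regular peer):

acl robots browser -i googlebot slurp msnbot
cache_peer 192.168.0.10 parent 80 0 no-query originserver proxy-only name=origin_robots
cache_peer_access origin_robots allow robots
cache_peer_access origin_robots deny all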
On 3/24/07, Amos Jeffries [EMAIL PROTECTED] wrote:
This is a problem for your web server configuration then. Your cache and
others around the world can be expected to cache any content that they
are allowed to.
The best way to prevent this content being cached is for the originating
web server
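Presumably the rest of that sentence is about the origin sending suitable cache-control headers; a minimal illustration of a response header that keeps caches from storing the object (the exact directives depend on what you want browsers to do as well):

Cache-Control: no-store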
Hi all,
We've been using Squid as a reverse proxy for a couple of years now. We're
currently migrating to Squid 2.6 and we're really satisfied with all the
enhancements in this version, especially the fact that we can use a
lot of features previously reserved for the proxy configuration.
Our context is
On 3/23/07, Amos Jeffries [EMAIL PROTECTED] wrote:
Looks like a case for something like this that prevents the group
'robots' from retrieving data not already in the cache:
acl robots
always_direct deny robots
No, that's not what I want. It's not a problem for us that robots
index all
On 3/24/07, Chris Robertson [EMAIL PROTECTED] wrote:
If you want to cache these infrequently visited URLs when regular people
visit, add an acl that matches the user-agent of the bots.
acl CITY urlpath_regex city1 city2 city3
acl GOOGLEBOT browser -i googlebot
cache deny CITY GOOGLEBOT
It's