Hi there,
What particular headers are you trying to log?
e.g. Via:, User-Agent:, etc.
Thanks,
tookers
Mario Remy Almeida wrote:
Hi All,
Squid Cache: Version 2.7.STABLE6
logformat headers %ts.%03tu %tg %a %rp [ %h ] %rm [ %h ]
access_log /var/log/squid/headers.log headers
Hmm, this works fine for me...
logformat custom ... %{User-Agent}h %h %a
I've tested on i686- and Sparc-based servers and this works fine.
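For reference, a minimal sketch of logging all request headers in Squid 2.7. This assumes the %>h format token (which, with no header name argument, dumps the full request header block); the log name and path are illustrative, not taken from the thread:

```
# illustrative squid.conf fragment -- logs timestamp, client IP,
# method, URL, and the complete set of request headers
logformat allheaders %ts.%03tu %>a %rm %ru [%>h]
access_log /var/log/squid/allheaders.log allheaders
```

Note that the bare %h in the original logformat line is ambiguous; %>h (request headers) or %<h (reply headers) makes the direction explicit.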
Cheers
tookers
Mario Remy Almeida wrote:
Hi,
I want to log all types of headers.
I have a similar rule, but on an i386 system with the same Squid version.
bergenp...@comcast.net wrote:
Is there a way to look at the object cache in squid and determine the
current freshness of the content?
I've got content in the squid cache where I would expect the content to
be a TCP_HIT. Looking in the squid access.log, I see the access going to
the server, with a server setting which is (apparently) overriding the
squid setting. Wondering what knobs/tools exist within squid to see
information about whether the object is fresh or stale.
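Two ways to inspect this, sketched below. This assumes squidclient is installed and the cache manager is reachable; the host, port, and URL are illustrative:

```
# List objects known to the cache via the cache manager interface
squidclient -h 127.0.0.1 -p 3128 mgr:objects

# Fetch a URL through the proxy and inspect the freshness-related
# headers Squid returns (Age, Expires, Cache-Control, X-Cache hit/miss)
curl -s -D - -o /dev/null -x 127.0.0.1:3128 http://example.com/ \
  | grep -iE '^(age|expires|cache-control|x-cache)'
```

If the origin server is sending Cache-Control: no-cache or very short max-age values, those will show up in the second command and would explain the misses.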
Thanks
tookers wrote:
bergenp...@comcast.net wrote:
Is there a way to look at the object cache in squid
tookers wrote:
Henrik Nordstrom-5 wrote:
Mon 2009-10-05 at 08:10 -0700, tookers wrote:
Hi Henrik,
Thanks for your reply. I'm getting TCP_MISS/200 for these particular
requests so the file exists on the back-end,
Are you positively sure you got that on the first one?
FacebookAccess facebook fbcdn_url
Thanks
Tookers
--
View this message in context:
http://www.nabble.com/Strange-issues-with-accessing-facebook-and-other-php-driven-sites-via-proxy-tp25807147p25852382.html
Sent from the Squid - Users mailing list archive at Nabble.com.
Henrik Nordstrom-5 wrote:
Mon 2009-10-05 at 08:10 -0700, tookers wrote:
Hi Henrik,
Thanks for your reply. I'm getting TCP_MISS/200 for these particular
requests so the file exists on the back-end,
Are you positively sure you got that on the first one? Not easy to tell unless ...
Change this:
http_access deny bad_url workdays
to:
http_access deny our_networks bad_url workdays
It should match any source IP address, and if the other two ACLs match then you
should get 'Access Denied'.
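A sketch of how the full rule set might look. Only the http_access line comes from the thread; the acl definitions below are illustrative guesses and need adjusting to the actual networks, domains, and hours in use:

```
# illustrative acl definitions -- adjust to your own environment
acl our_networks src 192.168.0.0/16 10.0.0.0/8
acl bad_url dstdomain .blocked.example.com
acl workdays time MTWHF 09:00-17:00

# deny requests from our networks to bad URLs during working hours
http_access deny our_networks bad_url workdays
```

ACL elements on a single http_access line are AND-ed together, so the deny fires only when all three match.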
Thanks,
Tookers
Henrik Nordstrom-5 wrote:
Tue 2009-09-29 at 02:41 -0700, tookers wrote:
Hello all,
I'm running several Squid boxes as reverse proxies. The problem I'm seeing
is that when there is a high number of connections, in the region of 80,000
per Squid at peak, I'm getting thousands of TCP_MISS entries.
Hi there,
Why not make use of some of your RAM for cache_mem? It will make requests
for smaller, more frequently requested files a hell of a lot quicker, and it
should give you a better hit ratio.
# 3GB process size limit in 32bit, so don't set higher than 1GB for a very busy cache
cache_mem
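A minimal sketch of what that setting might look like, assuming the 1 GB ceiling mentioned in the comment above; the exact value is illustrative and should be tuned to the workload:

```
# stay well under the ~3 GB process size limit of a 32-bit build
cache_mem 1024 MB
```

Remember that cache_mem covers only in-transit and hot objects; Squid's total process size will be larger, so leave headroom.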
922Mbps @ 68,000 connections, system resources are readily available with
Squid only ever using 30% CPU at peaks and over 4.5GB RAM free. The systems
also have 100,000 file descriptors available.
Any help / tips will be much appreciated.
Thanks,
Tookers
Squid Conf:-
http_port x.x.x.x:80 act-as-origin
to be caching. I can refresh the webmin system info every few
hours and see that /cache is growing in space used, though very slowly.
tookers (replying to Amos Jeffries): I've taken the working squid.conf (above) and
applied your suggestions to it (below). Please review this squid.conf
(below) and make suggestions.
Roger Cornelius wrote:
Apologies for what is probably a newbie question. I've searched the
squid directives, archives of this list, the net, etc., and haven't
discovered, or didn't recognize, the answer.
I'm using squid 2.7.STABLE6 in accelerator mode. I used the basic
accelerator
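For context, a basic Squid 2.7 accelerator setup typically looks something like the following. All hostnames and addresses here are illustrative placeholders, not taken from the thread:

```
# listen as an accelerator for one site
http_port 80 accel defaultsite=www.example.com

# forward cache misses to the origin server
cache_peer 192.0.2.10 parent 80 0 no-query originserver name=origin
cache_peer_access origin allow all
```

The originserver flag tells Squid to treat the peer as an origin web server rather than a sibling proxy.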
Chris Hostetter wrote:
: What would be really nice is a command line option and a bit of code
: in the cache peer setup that recognizes own IP and ignores the entry,
: to make this problem just all go away...
That would be awesome, but if I'm understanding you correctly it would ...