Is that problem already solved?
Because I have exactly the same problem here.
Cheers, Leon.
Nope. We got into some really heavy T-shooting, too much for the list. Trying to get
some useful debugging, but having a hard time trying to make sense of it. I suspect I
may have a hardware issue. Is it possible that a faulty stick of memory can cause
this? There are no entries in my /var/log/messages log. TIA
Mark Pelkoski
On Mon, 15 Mar 2004, Mark Pelkoski wrote:
Henrik,
If you were getting these errors, how would you T-shoot this?
I just built this box about two weeks ago. It has 4 separate diskd
stores for cache. I get about 20-30 "WARNING: failed to unpack
meta data" messages per MINUTE.
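For reference, a setup like the one described (four separate diskd stores) would look roughly like the following in squid.conf. The paths, sizes, and L1/L2 bucket counts below are made-up examples, not the poster's actual values:

```
# Hypothetical example only -- substitute your own directories and sizes.
# Syntax: cache_dir diskd Directory Mbytes L1 L2
cache_dir diskd /cache1 4096 16 256
cache_dir diskd /cache2 4096 16 256
cache_dir diskd /cache3 4096 16 256
cache_dir diskd /cache4 4096 16 256
```

Commenting these out one at a time (restarting Squid between changes) is one way to isolate a failing store or drive.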
-Original Message-
From: Mark Pelkoski
Sent: Monday, March 15, 2004 4:35 PM
To: 'Duane Wessels'
Duane,
Thank you for your response! I have a 34% hit rate. About the same for
the server that is not producing these errors.
We can start to narrow it down with the attached patch.
Can ANYBODY Help me with T-shooting this???
-Mark
-Original Message-
From: Mark Pelkoski
Sent: Wednesday, March 10, 2004 9:18 AM
To: Elsen Marc; [EMAIL PROTECTED]
Subject: RE: [squid-users] LOTS of WARNING: failed to unpack meta
data
List,
I just built this box about a week ago
a faulty drive. This is
a production system that I can't exactly play around with.
What is the best way to T-shoot this? TIA
Mark Pelkoski
- Check system (error) logging, watch for error(s) related to
disk (sub)system.
M.
My messages log is clean. I also tried commenting out each
. This is a production system that I can't
exactly play around with. What is the best way to T-shoot this? TIA
Mark Pelkoski
DOES ANYBODY HAVE AN IDEA ABOUT THIS???
-Original Message-
From: Mark Pelkoski
Sent: Wednesday, November 26, 2003 10:27 AM
To: [EMAIL PROTECTED]
Subject: [squid-users] Wb_group error message in cache.log
List,
I keep seeing this error in my cache.log a couple of times a day
Nothing in the smbd.log file. This message shows up randomly giving no
notice to any particular user. Just curious if this is any issue or not.
-Mark
-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Tuesday, December 02, 2003 9:22 AM
To: Mark Pelkoski
Cc: [EMAIL
List,
I keep seeing this error in my cache.log a couple of times a day. Is
this normal or do I have a problem? I require my users to belong to a
certain NT group in order to use Squid. I wasn't seeing it when I tested
it with 70 users. Now I have 800+ users.
Will having multiple Cache_dir's change the footprint of the Squid
process? How is that factored into the formula?
-Mark
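As a rough sketch of how cache_dir size feeds into the memory estimate: the common rule of thumb from the Squid FAQ is that the in-memory index costs on the order of 10 MB of RAM per 1 GB of cache_dir, on top of cache_mem itself. The numbers below are made-up examples, not the poster's configuration:

```shell
# Rule-of-thumb estimate (Squid FAQ): ~10 MB of index RAM per 1 GB of
# cache_dir, plus cache_mem. All values here are illustrative only.
cache_dir_gb=4        # e.g. four 1 GB diskd stores
cache_mem_mb=512      # value of cache_mem in squid.conf
index_mb=$((cache_dir_gb * 10))
total_mb=$((cache_mem_mb + index_mb))
echo "estimated footprint: ${total_mb} MB"   # prints: estimated footprint: 552 MB
```

So yes, adding cache_dir space grows the process footprint even if cache_mem stays fixed.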
-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Monday, November 24, 2003 4:28 PM
To: Mark Pelkoski
Cc: Michael R. Wayne; [EMAIL PROTECTED
Thanks. Last question... If I were to migrate from Redhat for a pure
Squid server, what Linux or BSD flavor might you suggest?
-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 25, 2003 1:50 PM
To: Mark Pelkoski
Cc: Henrik Nordstrom; [EMAIL
This has happened to me before... Look in your cache.log for something
like WARNING: Your cache is running out of file descriptors. If you
have this, there are FAQs on how to fix this.
-Mark
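If you suspect the file-descriptor warning mentioned above, a quick first check is the limit the Squid process inherits. The 8192 below is just an example value, and the exact place to raise the limit depends on your init script:

```shell
# Check the per-process file descriptor limit Squid inherits
# (run as the user that starts Squid):
ulimit -n
# A common fix is to raise the limit in the init script before Squid
# starts, e.g. "ulimit -HSn 8192" (example value), then confirm the
# new "file descriptors available" figure Squid logs at startup.
```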
-Original Message-
From: Maciej Wosko [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 25, 2003
Check your Ethernet. Both switch and card should NOT be set to
auto-negotiate.
And you should be using decent Ethernet cards (e.g. not RealTek) and
switches (e.g. not cheap Netgear).
325 users should not be hammering you much at all.
/\/\ \/\/
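Duplex mismatches are a classic cause of this kind of slowdown. On a real box you would run `ethtool eth0` (or `mii-tool`) as root; the interface name is an assumption. Here we parse a captured sample output just to show what to look for:

```shell
# Sample ethtool-style output (captured text, not a live query):
sample='Speed: 100Mb/s
Duplex: Half
Auto-negotiation: on'
duplex=$(printf '%s\n' "$sample" | awk -F': ' '/Duplex/ {print $2}')
echo "Duplex: $duplex"
# Half duplex on one end and Full on the other is the classic mismatch;
# forcing both switch port and NIC to the same speed/duplex fixes it.
```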
On Fri, Nov 21, 2003 at 01:15:40PM -0700, Mark
was pretty good for 800 Reqs/Min. Now in Production I have
325+ users at 2700 Reqs/Min and performance stinks. It's like being on a
dial-up connection. 3 of the procs are sitting below 1%. The other is
used by Squid at 99.9%. Is there any way to speed up performance on a
multi-proc system? TIA.
Mark
List,
The server I am running squid on has 2 Gigs of physical memory and a 2
Gig swap partition. I have my memory usage limit set to 1 Gig in my
squid.conf, but Squid often runs over 1 gig into my swap space after
about a month of uptime. How can I limit the memory footprint so it
stays in the
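One point worth noting for the question above: cache_mem caps only the memory pool Squid uses for in-transit and hot objects, not the total process size; the cache index and per-request buffers come on top of it. A hedged squid.conf sketch (512 MB is an example value, not a recommendation):

```
# cache_mem limits only the hot-object/in-transit memory pool.
# The total process footprint (index, buffers) will be larger, so
# size this well below physical RAM. Example value only:
cache_mem 512 MB
```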
List,
I am having difficulty getting to my Cache Manager. When cachemgr.cgi is
executed, I submit the port 8080, which is the port my squid is running
on, and no user or password, and I receive this screen:
ERROR
Cache Access Denied
While trying to retrieve the URL: cache_object://localhost/
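A "Cache Access Denied" on cache_object:// requests usually means the request did not match an allow rule for the manager ACL before hitting a deny. The classic Squid 2.5 squid.conf lines look like this; adapt the ACL names and addresses to your own config:

```
# Example only -- the stock Squid 2.5 manager access rules:
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
http_access allow manager localhost
http_access deny manager
```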
-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Monday, July 07, 2003 10:02 AM
To: Mark Pelkoski
Cc: [EMAIL PROTECTED]
Subject: Re: [squid-users] Wb_ntlmauth breaks persistant_request_timeout?
Mon 2003-07-07 at 16:55, Mark Pelkoski wrote:
This appears to be a bug to me. I have 800
Adam,
I came in this morning and tested this config again, and it is not
working. I restarted the Squid service and this did not help. It looks
like the timeout is back to 1 minute, but the conf file has a
persistent_request_timeout of 30 minutes which was working yesterday. I
don't understand WHY
I have just installed Squid on my RedHat Linux 7.3 machine and have it
configured with mostly the defaults and I control it through webmin.
Problem is, when I configure my browser to use that box as a proxy
server I get the following error...
-
and Checkpoint software download center.
I am using Squid 2.5 Stable-2 in conjunction with wb_group auth. Please
help. Looks like I might lose this one to Microsoft's Proxy server if I
can't get this server working right. Please let me know if you need conf
files or packet traces. Thanks in advance.
-Mark
I sure have looked. The problem is many people describe the problem in
different ways, so searching was hard. Thanks for the settings. I will
try these out. I do NOT want M$ to win this one. I'm trying to decommission
2 M$ Proxy servers.
-Mark
-Original Message-
From: Adam Aube
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Wednesday, July 02, 2003 11:22 AM
To: [EMAIL PROTECTED]
Subject: RE: [squid-users] How to fix active page time-outs? PLEASE HELP
Why? If it works...
Jim
-Original Message-
From: Mark Pelkoski [mailto:[EMAIL
Okay, here are my new settings:
half_closed_clients on
request_timeout 10 minutes
persistent_request_timeout 5 minutes
I opened up a Yahoo account to test. It seems the connection does stay
open up to 5 minutes (better than before), then dies. So, the answer
would be to up the
file to try and eliminate these breakdowns? I have
a RH9 box with the RPM of 2.5Stable1-2 installed and mostly default
settings for testing. I know it is Squid because I can bypass it or use
MS Proxy 2.0 and receive no errors. TIA.
Mark Pelkoski
is hitting its filters. Also, is
there any way to have USERS in the always_direct line? Any help is
appreciated!
Mark Pelkoski, MCP, CCSE
IT Security Analyst L3
Phone: (208) 893-3233
Text Page: [EMAIL PROTECTED]
[EMAIL PROTECTED]