John,

I use the one true editor - VI :-)

OK, on Windows and Unix/Linux it's GVIM, VI improved.  All the regex, replace, search, pattern matching, copy/paste, language- and file-specific style sheets one could ask for...

Doug


On Apr 22, 2010, at 8:59 AM, John Sundberg wrote:


Speaking of logs -- what do people use to read them?

Does anybody use splunk -- do you like it -- does it help?

-John




On Apr 21, 2010, at 4:57 PM, Grooms, Frederick W wrote:

And since Anne is on Linux she can set up a cron job to archive the logs every 5 (or 10) minutes.  I do that currently in production so I can always go back a complete day in the logs.
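A minimal sketch of that kind of cron-driven archiving (SRC and LOGDIR below are placeholder paths, not the actual AR System defaults; adjust them to your install, and note that `-delete` assumes GNU find, which is fine on Linux):

```shell
#!/bin/sh
# Sketch: copy the current filter log aside with a timestamp and keep
# roughly one day of copies.  SRC and LOGDIR are placeholders -- point
# SRC at your real arfilter log.
SRC=${SRC:-./arfilter.log}
LOGDIR=${LOGDIR:-./arlog-archive}
[ -f "$SRC" ] || : > "$SRC"                      # demo only: ensure the source exists
mkdir -p "$LOGDIR"
cp "$SRC" "$LOGDIR/arfilter.$(date +%Y%m%d%H%M%S).log"
# Prune archived copies older than one day (-delete is a GNU find extension)
find "$LOGDIR" -name 'arfilter.*.log' -mtime +1 -delete
```

Saved as, say, /usr/local/bin/archive_arlogs.sh, a crontab entry like `*/5 * * * * /usr/local/bin/archive_arlogs.sh` would run it every 5 minutes.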
 
Fred
 
From: Action Request System discussion list(ARSList) [mailto:[email protected]] On Behalf Of Benedetto Cantatore
Sent: Wednesday, April 21, 2010 3:59 PM
To: [email protected]
Subject: Re: Log size and server performance
 
I found 500 megs to be a good size.  I can usually capture what I'm looking for within a 10-15 minute window. 
 
Ben Cantatore
Remedy Manager
(914) 457-6209
 
Emerging Health IT
3 Odell Plaza
Yonkers, New York 10701

>>> [email protected] 04/21/10 12:29 PM >>>
I ask because I know appending to a 1 GB file takes a lot longer (in computer time) than appending to a 1 MB file.  I was wondering if anyone is aware of a practical limit?
 

Anne Ramey

E-mail correspondence to and from this address may be subject to the North Carolina Public Records Law and may be disclosed to third parties only by an authorized State Official.
 
From: Action Request System discussion list(ARSList) [mailto:[email protected]] On Behalf Of Lyle Taylor
Sent: Wednesday, April 21, 2010 12:09 PM
To: [email protected]
Subject: Re: Log size and server performance
 
Well, this isn’t a definitive answer by any means, but my suspicion is that log file size should be pretty much irrelevant from a performance perspective: the server is just appending to the existing file, which is a quick operation regardless of how large the file already is.  The more important point is that if you’re getting that much logging output, having logging on at all is probably impacting performance on the server.  So, if the performance of the system seems acceptable with logging turned on, you should be able to let it run as long as you want, until you either reach your maximum file size or fill up the file system you’re logging to, without any additional performance impact from the size of the log files.  Now, how to do something useful with such large files is another question…
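The append-cost claim is easy to sanity-check from a shell (a rough sketch using throwaway files in the current directory; timings are only indicative):

```shell
# Rough sanity check: appending to a large file should take about the same
# time as appending to an empty one.  Throwaway files in the current dir.
dd if=/dev/zero of=big.log bs=1M count=50 2>/dev/null   # ~50 MB head start
: > small.log                                           # empty file
time sh -c 'i=0; while [ $i -lt 1000 ]; do echo "log line $i" >> big.log; i=$((i+1)); done'
time sh -c 'i=0; while [ $i -lt 1000 ]; do echo "log line $i" >> small.log; i=$((i+1)); done'
# The two times should come out roughly equal: an append seeks to the end
# of the file, it never rewrites what is already there.
```

Remove big.log and small.log when done.  What does grow with file size is anything that reads or copies the whole log, which is where archiving jobs and analysis tools feel the pain.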
 
Lyle
 
From: Action Request System discussion list(ARSList) [mailto:[email protected]] On Behalf Of Ramey, Anne
Sent: Wednesday, April 21, 2010 9:49 AM
To: [email protected]
Subject: Log size and server performance
 
We are looking at capturing more detailed logging to try to catch some intermittent problems in production that we can't seem to reproduce in test.  The problem is that the arfilter log on our server that runs escalations is currently 50 MB and contains only about 2 minutes' worth of information.  This is, obviously, because of the notifications, but I'm curious at what point I can increase my log file sizes before I start to see a performance hit.  Any ideas/experiences?
 
ITSM 7.0.03 P9
ARS 7.1 P6
Linux
Oracle
 
It looks like 100 MB would catch a half hour of information or longer in all logs except the arfilter (but we have to set all of the log files to the same size).  500 MB might get us a half hour in the filter log, but the other logs would be unnecessarily big, and I'm wondering if having all of the logs that size could cause server response time to slow.
 

Anne Ramey

 
 
 
_attend WWRUG10 www.wwrug.com ARSlist: "Where the Answers Are"_

--
John Sundberg

Kinetic Data, Inc.

"Building a Better Service Experience"

Recipient of the WWRUG09 Innovator of the Year Award





--
Doug Blair
+1 224-558-5462

200 North Arlington Heights Road
Arlington Heights, Illinois 60004



