Hi John,

We have Splunk tied into some of our systems here.  I'm hoping to try it
out with Remedy logs when I have some "spare" time.  I also downloaded it
at home, but haven't had that spare time there either.

Jason

On Fri, Apr 23, 2010 at 12:34 PM, John Sundberg <
john.sundb...@kineticdata.com> wrote:

>
> That is funny -- but I was thinking the same thing. (why use an editor) :)
>
> Which is why I was wondering if anybody uses Splunk -- it seems like a
> smart idea -- not an editor, but an actual viewer.
>
>
> Ultimately -- I would like a tool/strategy to open wicked big files (that
> may be open for writing by another process) -- and then start jumping
> around.
>
> Example:
>
> Open log.
>
> Type a pattern of some sort -- like a table name -- then everything goes
> away except for the lines with the table name
> Then type another pattern -- and more goes away
>
> then -- when I move up/down -- and select lines -- it auto adds 5 to 10
> lines above and below (from the raw log) -- so I can get some context.
>
>
> Something like that would be nice.
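For what it's worth, the filter-then-expand workflow described above can be roughly approximated with plain grep; this is only a sketch, and the file name and the "HPD:Help Desk" pattern are invented for the demo:

```shell
# Build a tiny stand-in log to demonstrate on.
printf 'line 1\nHPD:Help Desk insert\nline 3\nHPD:Help Desk update\nline 5\n' > /tmp/demo.log

# 1. Everything goes away except lines containing the table name.
grep 'HPD:Help Desk' /tmp/demo.log

# 2. Type another pattern and more goes away.
grep 'HPD:Help Desk' /tmp/demo.log | grep 'update'

# 3. Re-expand a selected line with N lines of context above/below.
grep -C 1 'update' /tmp/demo.log
```

An interactive viewer would keep the raw file open and re-run steps like these as you type, which is essentially the workflow being described.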
>
> And -- something that could combine multiple logs into one (which is what I
> think Splunk does)
>
> So -- you could look at a sql log and a filter log together - even though
> they are separate files.
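A low-tech sketch of that combined view, assuming each line begins with a sortable timestamp (the file names and contents below are invented): already-sorted logs can be interleaved chronologically with a merge.

```shell
# Two stand-in logs whose lines start with timestamps.
printf '2010-04-23 12:00:01 SQL SELECT 1\n2010-04-23 12:00:03 SQL COMMIT\n' > /tmp/arsql.log
printf '2010-04-23 12:00:02 FLTR filter fired\n' > /tmp/arfilter.log

# sort -m merges files that are each already sorted, interleaving the
# two logs in timestamp order without re-sorting anything.
sort -m /tmp/arsql.log /tmp/arfilter.log
```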
>
> -John
>
> On Apr 23, 2010, at 11:35 AM, Jarl Grøneng wrote:
>
> Why do you need to edit the logfiles? :-)
>
> --
> Jarl
>
> 2010/4/23 Doug Blair <d...@blairing.com>
>
>> John,
>>
>> I use the one true editor - VI :-)
>>
>> OK, on Windows and Unix/Linux it's GVIM -- VI improved.  All the regex,
>> replace, search, pattern matching, copy/paste, language- and file-specific
>> style sheets one could ask for...
>>
>> Doug
>>
>>
>> On Apr 22, 2010, at 8:59 AM, John Sundberg wrote:
>>
>>
>> Speaking of logs -- what do people use to read them?
>>
>> Does anybody use Splunk -- do you like it -- does it help?
>>
>> -John
>>
>> On Apr 21, 2010, at 4:57 PM, Grooms, Frederick W wrote:
>>
>> And since Anne is on Linux she can set up a cron job to archive the logs
>> every 5 (or 10) minutes.   I do that currently on production so I can always
>> go back a complete day in the logs.
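A cron entry like `*/5 * * * * /usr/local/bin/archive-arfilter.sh` would do what Fred describes; a minimal version of the script it might run (all paths here are invented for the sketch, not his actual setup):

```shell
# Copy the live filter log aside under a timestamped name so a full
# day can be reconstructed later from the archived slices.
mkdir -p /tmp/ar-archive
printf 'some filter trace\n' > /tmp/arfilter.log   # stand-in for the real log
stamp=$(date +%Y%m%d%H%M)
cp /tmp/arfilter.log "/tmp/ar-archive/arfilter.$stamp.log"
ls /tmp/ar-archive
```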
>>
>> Fred
>>
>> *From:* Action Request System discussion list(ARSList)
>> [mailto:arslist@ARSLIST.ORG] *On Behalf Of *Benedetto Cantatore
>> *Sent:* Wednesday, April 21, 2010 3:59 PM
>> *To:* arslist@ARSLIST.ORG
>> *Subject:* Re: Log size and server performance
>>
>> I found 500 megs to be a good size.  I can usually capture what I'm
>> looking for within a 10-15 minute window.
>>
>> Ben Cantatore
>> Remedy Manager
>> (914) 457-6209
>>
>> Emerging Health IT
>> 3 Odell Plaza
>> Yonkers, New York 10701
>>
>> >>> anne.ra...@its.nc.gov 04/21/10 12:29 PM >>>
>> I ask because I know appending to a 1 GB file takes a lot longer (in
>> computer time) than appending to a 1 MB file.  I was wondering if anyone
>> was aware of a practical limit?
>>
>>
>> Anne Ramey
>> *E-mail correspondence to and from this address may be subject to the
>> North Carolina Public Records Law and may be disclosed to third parties only
>> by an authorized State Official.*
>>
>> *From:* Action Request System discussion list(ARSList)
>> [mailto:arslist@ARSLIST.ORG] *On Behalf Of *Lyle Taylor
>> *Sent:* Wednesday, April 21, 2010 12:09 PM
>> *To:* arslist@ARSLIST.ORG
>> *Subject:* Re: Log size and server performance
>>
>> Well, this isn't a definitive answer by any means, but my suspicion is
>> that log file size should be largely irrelevant from a performance
>> perspective: the server just appends to the existing file, which is a
>> quick operation.  The more important point is that if you're getting
>> that much logging output, having logging on at all is probably
>> impacting server performance.  So if the system performs acceptably
>> with logging turned on, you should be able to let it run as long as
>> you want -- until you reach your maximum file size or fill up the file
>> system you're logging to -- without any additional performance impact
>> from the size of the log files.  Now, how to do something useful with
>> such large files is another question...
>>
>> Lyle
>>
>> *From:* Action Request System discussion list(ARSList)
>> [mailto:arslist@ARSLIST.ORG] *On Behalf Of *Ramey, Anne
>> *Sent:* Wednesday, April 21, 2010 9:49 AM
>> *To:* arslist@ARSLIST.ORG
>> *Subject:* Log size and server performance
>>
>> We are looking at capturing more effective logging to try to catch some
>> intermittent problems in production that we can't seem to reproduce in
>> test.  The problem is that the arfilter log on our server that runs
>> escalations is currently 50M and contains about 2 minutes' worth of
>> information.  This is, obviously, because of the notifications, but I'm
>> curious how far I can increase my log file sizes before I start to see
>> a performance hit.  Any ideas/experiences?
>>
>> ITSM 7.0.03 P9
>> ARS 7.1 P6
>> Linux
>> Oracle
>>
>> It looks like 100M would capture half an hour of information or longer in
>> all logs except the arfilter (but we have to set all of the log files to
>> the same size).  500M might get us half an hour in the filter log, but the
>> other logs would be unnecessarily big, and I'm wondering if having all of
>> the logs that size could cause server response time to slow.
>>
>>
>> Anne Ramey
>>
>>
>>
>> _attend WWRUG10 www.wwrug.com ARSlist: "Where the Answers Are"_
>>
>>    --
>> John Sundberg
>>
>> Kinetic Data, Inc.
>> "Building a Better Service Experience"
>> Recipient of the WWRUG09 Innovator of the Year Award
>>
>> john.sundb...@kineticdata.com
>> 651.556.0930  I  www.kineticdata.com
>>
>>
>> Doug
>>
>> --
>> Doug Blair
>> d...@blairing.com
>> +1 224-558-5462
>>
>> 200 North Arlington Heights Road
>> Arlington Heights, Illinois 60004
>>
>>
>>
>>
>
>
> --
> John Sundberg
>
> Kinetic Data, Inc.
> "Building a Better Service Experience"
> Recipient of the WWRUG09 Innovator of the Year Award
>
> john.sundb...@kineticdata.com
> 651.556.0930  I  www.kineticdata.com
>
>
>

_______________________________________________________________________________
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org