We are looking at capturing more detailed logging to try to catch some 
intermittent problems in production that we can't seem to reproduce in test.  
The problem is that the arfilter log on the server that runs our escalations is 
currently 50M and holds only about 2 minutes' worth of information.  That is, 
obviously, because of the notification volume, but I'm curious at what point 
increasing my log file sizes will start to cause a performance hit.  Any 
ideas/experiences?
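
For reference, here's the setting we'd be changing, as I understand it -- the 
cap is a single server-wide value in ar.conf, in bytes, shared by every 
server-side log (please correct me if 7.1 reads it differently):

  # ar.conf -- one cap applies to filter, API, SQL, escalation logs, etc.
  # Value is in bytes; 0 means the logs grow without limit.
  Max-Log-File-Size: 104857600   # 100M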

ITSM 7.0.03 P9
ARS 7.1 P6
Linux
Oracle

It looks like 100M would catch a 1/2 hour of information or longer in all logs 
except the arfilter (but we have to set all of the log files to the same size). 
 500M might get us a 1/2 hour in the filter log, but the other logs would be 
unnecessarily big, and I'm wondering whether having all of the logs that size 
could slow server response time.
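
For anyone checking my math, here's the back-of-the-envelope retention 
calculation, assuming a steady write rate (ours isn't steady -- the 
notification bursts dominate -- so the filter-log figures are a floor for 
peak periods, not a promise):

  # Rough retention estimate: minutes of history a size cap holds
  # at an observed growth rate. The rate below is from our own arfilter
  # log; substitute your own measurements.
  def retention_minutes(cap_mb, rate_mb_per_min):
      return cap_mb / rate_mb_per_min

  filter_rate = 50 / 2.0   # arfilter: 50M in ~2 minutes => ~25 MB/min
  for cap_mb in (100, 500):
      mins = retention_minutes(cap_mb, filter_rate)
      print("%dM arfilter cap ~= %.0f minutes at peak rate" % (cap_mb, mins))
  # 100M -> ~4 minutes, 500M -> ~20 minutes at the peak rate;
  # off-peak the same caps hold considerably more.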

Anne Ramey


