I can't directly answer your question, but here is what I do on my central servers (where I've seen 92K log messages arrive in a single second). I currently rotate logs once per minute. rsyslog creates all the files as <something>-messages
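For context, the once-a-minute rotation has to be driven from outside rsyslog, typically by cron. A minimal sketch of a system crontab entry (the original post does not show the cron side; the path matches the script below, but the entry itself is an assumption):

```
# run the rotation script at the top of every minute
# (system crontab format, hence the "root" user field) --
# an assumed setup, not shown in the original post
* * * * * root /usr/local/bin/newlogs
```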

mercury1-p:/var/home/dlang# cat /usr/local/bin/newlogs
#!/bin/sh
#

PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
umask 022
year=`date +%Y`
month=`date +%m`
day=`date +%d`
fdate=`date +%Y%m%d.%H%M`

logroot=/var/log

logroll=$logroot/oldlogs

cd $logroot
mkdir -p $logroll/$year/$month/$day >/dev/null 2>&1

mv messages messages.$fdate
for file in *-messages; do
    mv "$file" "$file.$fdate"
done
pkill -HUP rsyslogd
gzip -9 *messages.$fdate
mv *messages.$fdate.gz $logroll/$year/$month/$day/


This does not compress the files in parallel (not needed on my system), but to modify it to do so, replace the final gzip line with

for file in *messages.$fdate; do
    gzip -9 "$file" &
done
wait

and it will compress them all in parallel.
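One caveat with backgrounding every gzip: if a minute's worth of logs covers many hosts, this starts one process per file all at once. A bounded variant (my substitution using xargs -P, which GNU and BSD xargs support but POSIX does not require; it assumes $fdate is set as in the script above) caps the number of concurrent compressors:

```shell
# compress the rotated files with at most 4 gzip processes at a time;
# the filenames the script produces contain no whitespace, so a
# newline-delimited list is safe to feed to xargs here
printf '%s\n' *messages."$fdate" | xargs -n 1 -P 4 gzip -9
```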

David Lang
_______________________________________________
rsyslog mailing list
http://lists.adiscon.net/mailman/listinfo/rsyslog
http://www.rsyslog.com/professional-services/
What's up with rsyslog? Follow https://twitter.com/rgerhards
