I take advantage of a handy feature of Apache that allows log data to be
piped to another program instead of written to a file, and pipe its log
data directly to the stdin of a small C program, which does a tiny bit of
parsing before inserting it into MySQL.
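For reference, the piping is done with Apache's CustomLog directive - putting
a "|" at the start of the target makes Apache spawn that program and feed it
each log line on stdin. Something like this (the /usr/local/bin/log2sql path
is just a made-up example, substitute your own parser):

```
# httpd.conf - pipe access-log lines to a program instead of a file
CustomLog "|/usr/local/bin/log2sql" common
```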
Before I did it in C, I was just using a PHP script - which, incidentally,
was fine - I only redid it in C in the interests of saving a bit of CPU and
memory on the heavily worked servers it's being utilised on.
The beauty of this is that your data is always up to date, you can log any
number of servers to one central database via TCP (or multiple databases
if you prefer) and you don't have to manage (potentially) unwieldy logfiles.
Also, if at a later date you wanted to go back to using a regular text-file
parser for your stats, there'd be nothing stopping you from dumping out the
database contents back into the Apache log format again.
So far the only negative I can see is that if MySQL is down, you lose your
log data for the duration of the downtime. Not a big negative to me, since I
like to make sure MySQL is running at least as often as the webserver is.
jason
> the problem with that is as soon as you run a load-balanced installation,
> collecting apache logs starts to be a pain in the a$$ :)
>
> I have given some thought to the logging thing, but am still undecided re:
> letting apache do its thing, and writing scripts to aggregate the logs, or
> turning off apache logging and going to the DB.
>
> problem is this puts an annoying amount of stress on the production DB, so
> there you have it, the dilemma :)
>
> -a
--
PHP General Mailing List (http://www.php.net/)