I'm writing a daemon to monitor a squid logfile and put its contents into
a MySQL database. It looks something like this:
my $dbh = DBI->connect($dsn, $user, $passwd);
open(LOGFILE, $log) or die "yikes, no logfile? $!\n";
for (;;) {
    $dbh ||= DBI->connect($dsn, $user, $passwd);  ## only re-connect if handle has gone away?
    while (<LOGFILE>) {
        $dbh->do(qq{INSERT INTO blah (blah, blah) VALUES (blah, blah)});
    }
}
## end of example
But when I run the script, it takes up loads of processor time. I'm
assuming it's the reconnect line that's causing the problem. I can't
remember where I got that syntax from, and I'm not 100% sure it's sane.
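
In case it makes the intent clearer, here's roughly what I *think* that
line ought to be doing: only reconnect when the handle actually stops
answering $dbh->ping, and pause briefly at EOF rather than spinning. The
connection details, table/column names, one-second sleep and the
seek-to-clear-EOF trick (that last one from perlfaq5) are just my guesses,
not anything I've tested:

use strict;
use warnings;
use DBI;

my ($dsn, $user, $passwd) = ('DBI:mysql:database=squid', 'me', 'secret');  ## placeholders
my $log = '/var/log/squid/access.log';                                     ## placeholder path

my $dbh = DBI->connect($dsn, $user, $passwd, { RaiseError => 1 });
open(LOGFILE, $log) or die "yikes, no logfile? $!\n";

for (;;) {
    ## reconnect only when the existing handle no longer responds,
    ## instead of testing the (always-true) $dbh every pass
    $dbh = DBI->connect($dsn, $user, $passwd, { RaiseError => 1 })
        unless $dbh && $dbh->ping;

    while (<LOGFILE>) {
        $dbh->do(q{INSERT INTO access_log (line) VALUES (?)}, undef, $_);  ## placeholder schema
    }

    sleep 1;               ## don't busy-loop once we've caught up
    seek(LOGFILE, 0, 1);   ## clear EOF so new lines are picked up (perlfaq5 tail -f idiom)
}
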
I'm running DBI v1.34, and the manpage says I should discuss connect_cached
here. So, here I am. What's the status of connect_cached? Is it useful or
lethal?
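For concreteness, the change I'm contemplating is just swapping the connect
call, since connect_cached takes the same arguments as connect; as far as I
can tell from the docs it pings the cached handle and replaces it if the
connection has gone away, which is what I'm trying to do by hand above:

    ## inside the loop, instead of the ||= line:
    $dbh = DBI->connect_cached($dsn, $user, $passwd);
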
The squid server(s) are going to be *very* busy, as they serve all the
libraries in our County, as well as all the employees of the County Council.
At peak times and in bursts, there could be thousands of requests per second
to each cache. I want to log this activity in real time, because the volume
of information at the *end* of a day is enormous and unwieldy to deal with.
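
Given those rates, I assume I'll also want to prepare the INSERT once and
reuse it rather than call do() per line. Something like the sketch below is
what I'd planned; the column names are placeholders, the split indices are
my guess at the native squid log fields, and the commit-every-500-rows
batching is just my own stab at easing the load on MySQL:

    ## prepare once, execute per line; batch commits to cut per-row overhead
    my $sth = $dbh->prepare(q{
        INSERT INTO access_log (stamp, client, url) VALUES (?, ?, ?)
    });

    $dbh->{AutoCommit} = 0;      ## batch rows into explicit transactions
    my $pending = 0;

    while (<LOGFILE>) {
        my ($stamp, $client, $url) = (split ' ')[0, 2, 6];  ## guessed field positions
        $sth->execute($stamp, $client, $url);
        if (++$pending >= 500) { ## arbitrary batch size
            $dbh->commit;
            $pending = 0;
        }
    }
    $dbh->commit if $pending;
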
All info and suggestions appreciated.
Regards
Dominic Pain