Something you could attempt is to build an IN list for the WHERE
clause, allowing more rows to be updated per trip to the database server.
Pseudo code:

my $upd_sql = 'UPDATE .... SET .... WHERE email IN ( ? ... )';
# number of ?'s based on the number of email addresses to update at once
my $sth = $dbh->prepare($upd_sql);

my @emails;
while (my $email = <>) {
    chomp $email;
    push @emails, $email;
    if (@emails >= 20) {
        $sth->execute(@emails);
        # check success ...
        @emails = ();    # clear the emails list
    }
}

# After the while loop exits, update any email addresses that are
# left over (fewer than 20, so that statement needs fewer
# placeholders):
if (@emails) {
    ...
}
You'd have to benchmark this to see if it really makes a difference.
Also, it would only work on a "simple" update. By simple, I mean
setting the same columns to the same values for every email address.
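If you go this route, the number of ?'s in the IN list has to match the
batch size exactly. One way to build the list is Perl's repetition
operator. A sketch (the addresses are made-up examples; the table and
column names are taken from your script below, and the commented-out
lines assume $dbh is a connected DBI handle as in that script):

```perl
# Build an UPDATE with one '?' placeholder per address in the batch.
my @batch = ('a@example.com', 'b@example.com', 'c@example.com');

# List repetition: ('?') x @batch gives one '?' per element of @batch.
my $in  = join ',', ('?') x @batch;   # "?,?,?"
my $sql = "UPDATE ktj SET ktjnews = 'N' WHERE email IN ($in)";
print "$sql\n";

# Then prepare once per batch size and bind the batch:
# my $sth = $dbh->prepare($sql);
# $sth->execute(@batch);
```

Note that the leftover addresses after the loop are fewer than a full
batch, so they need a statement prepared with a correspondingly
shorter IN list.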
Tom
On Mon, Jun 03, 2002 at 12:00:04PM -0700, Greg D. wrote:
> Hello,
>
> I was wondering if there was any way I could speed up the amount
> of time it takes for my script to run. I know my DB table is quite big, but
> is there any way to speed up the update in my script, or is there something I
> can do in the database to speed it up?
>
> Thanks for any help
>
> here a copy of my script:
>
> #!/usr/bin/perl
>
> use DBI;
>
> close STDERR;
> open (STDERR,">>/tmp/ktjunsub.log");
>
> $ktjpath = "/var/www/mail";
> $ktjlist = "ktj.lst_rem";
>
> $database = "DBI:mysql:maillist:localhost:3306";
> $username = "********";
> $passwd = '*******';
> $dbh = DBI->connect($database, $username, $passwd)
>     || die "Error connecting " . $dbh->errstr;
>
> print "Updating KTJ table\n";
>
> open(A, "$ktjpath/$ktjlist") || &death("Can't open $ktjlist: um121");
>
> my $sql_statement = "UPDATE ktj SET ktjnews = 'N' WHERE email = ?";
> my $sth = $dbh->prepare($sql_statement)
>     || die "Error1 preparing update statement on track:\n" . $dbh->errstr;
>
> while (<A>){
>
> print "$_";
>     $sth->execute($_)
>         || die "Error2 executing update statement on track:\n" . $sth->errstr;
> }
> close A;
>
> print "\nfinished\n";
>
> close STDERR;
--
Thomas A. Lowery
See DBI/FAQ http://xmlproj.dyndns.org/cgi-bin/fom