mark d wrote:
On 1/18/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
---- mark d <[EMAIL PROTECTED]> wrote:
This optimization might have made some sense decades ago over a serial line, but
not today.  It saves 320 single-packet round trips, which is an
infinitesimal amount of traffic.
Well, traffic-wise, you're right.  But it also saves parsing all those rows
out of the DB and through the DBI; this way the only process that has to work
with the data is the Oracle engine.  Use DBI where appropriate...  Right tool
for the job, and all that...

I have no idea what "parsing all those rows out of the DB and through the DBI"
means, though it sounds like you think row data is copied from the DB to
the DBI in the course of a SQL DELETE statement (that would be a fundamental
misunderstanding of SQL and the DBI).

In any case, you're mistaken that the PL/SQL saves anything but a few packets.
To be explicit, the proposed PL/SQL was:

declare
    rows_processed number;
    batch_size number := 50000;
begin
    rows_processed := batch_size;
    while rows_processed = batch_size loop
        delete from table where rownum < batch_size + 1;
        rows_processed := SQL%ROWCOUNT;
        commit;
    end loop;
end;

The equivalent DBI code is:

$sth = $dbh->prepare("delete from TABLE where rownum < 50001");
while ( ($deleted = $sth->execute) > 0 ) {
    print "deleted $deleted\n";
}

The only difference between the two is a few network packets.

Is that the only difference? I read it (and I could be wrong) as: your SQL will stop once rownum hits 50001, while his will keep going in batches of 50,000 until there is nothing left matching the delete statement.

Yours is a simple delete. I need to break up my deletes into manageable sizes, do the delete, do the next batch, etc.

On the delete side I have about 16 million records that will be deleted. I have two of those to do. I have one update that changes a few values on about 3.1 million records (not the whole table).
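The batching pattern being discussed (delete a chunk, commit, repeat until a batch comes back short) can be sketched outside Oracle too. Here is a minimal illustration using Python's stdlib sqlite3; the table name and batch size are invented for the example, and since SQLite has no rownum, a rowid subselect stands in for the rownum predicate:

```python
import sqlite3

def batched_delete(conn, table, batch_size=50000):
    """Delete every row from `table`, committing after each batch
    so no single transaction grows unmanageably large."""
    cur = conn.cursor()
    while True:
        # SQLite has no rownum; limit each batch via a rowid subselect.
        cur.execute(
            f"DELETE FROM {table} WHERE rowid IN "
            f"(SELECT rowid FROM {table} LIMIT ?)",
            (batch_size,),
        )
        deleted = cur.rowcount          # rows removed in this batch
        conn.commit()                   # keep each transaction small
        if deleted < batch_size:        # short batch => table is empty
            break

# Tiny demonstration on an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(250)])
conn.commit()
batched_delete(conn, "t", batch_size=100)   # deletes in batches of 100, 100, 50
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])
```

Like the PL/SQL version, the loop terminates when a batch deletes fewer rows than the batch size; the per-batch commit is what keeps the rollback/undo footprint bounded on multi-million-row deletes.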

Robert
