-- Robert Hicks <[EMAIL PROTECTED]>
> I have been handed a couple of requests to change some data. One is an
> update on 3.6 million records and the other is two different deletions of
> over 16 million rows of data. I need to be able to do these in batches as
> I am told that Oracle will "blow up" if I do them in one shot.
The issue is rollback/redo space. If you have too large
a transaction pending you can run out of space and
Oracle will abort the transaction. You may have sufficient
space to blow off this many records, but it'll be a True
Pain to find out the hard way...
> I would like to do them in increments of 50,000 then a commit...then the
> next 50,000 and then another commit.
If you are updating the records using their primary
key then just use
my $i = 0;
for( @updatz )
{
...
$dbh->commit unless ++$i % 10; # every 10th update is a commit.
}
$dbh->commit; # flush the last partial batch
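The commit-every-Nth-row idea above can be sketched in runnable form. This uses Python's sqlite3 as a stand-in for DBI/Oracle so it runs anywhere; the table and column names are invented for the demo.

```python
import sqlite3

BATCH = 10  # commit after every 10th update, as in the loop above

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t (id, val) VALUES (?, ?)",
                 [(i, "old") for i in range(1, 101)])
conn.commit()

updatz = list(range(1, 101))  # primary keys to update
for n, pk in enumerate(updatz, start=1):
    conn.execute("UPDATE t SET val = 'new' WHERE id = ?", (pk,))
    if n % BATCH == 0:        # every BATCH-th update is a commit
        conn.commit()
conn.commit()                 # flush any trailing partial batch

remaining = conn.execute(
    "SELECT COUNT(*) FROM t WHERE val = 'old'").fetchone()[0]
print(remaining)  # 0
```

Each commit releases the undo generated so far, which is exactly what keeps the rollback segments from filling up.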
For the deletes, fry them in blocks based on a count.
Oracle supports delete ... where rownum <= X to delete
at most that many rows per statement; repeat until no
rows are affected.
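Oracle's count-limited delete (`DELETE ... WHERE ROWNUM <= :n`) can be looped until nothing is left. A sketch using Python's sqlite3, which gets the same effect with a LIMITed subquery since it has no ROWNUM; table and column names are invented for the demo.

```python
import sqlite3

BATCH = 1000  # rows deleted per statement

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (id INTEGER PRIMARY KEY, dead INTEGER)")
conn.executemany("INSERT INTO big VALUES (?, 1)",
                 [(i,) for i in range(5000)])
conn.commit()

while True:
    cur = conn.execute(
        "DELETE FROM big WHERE rowid IN "
        "(SELECT rowid FROM big WHERE dead = 1 LIMIT ?)",
        (BATCH,),
    )
    conn.commit()          # commit each batch, as with AutoCommit
    if cur.rowcount == 0:  # nothing left to delete
        break

left = conn.execute("SELECT COUNT(*) FROM big").fetchone()[0]
print(left)  # 0
```

The loop terminates when a batch affects zero rows, so it needs no up-front row count.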
If there is a surrogate (i.e., integer) key, use that
instead of the rowid:
my $dbh
= DBI->connect
(
'dbi:Oracle:blah:blah',
$user,
$pass,
{
AutoCommit => 1,
RaiseError => 1,
}
);
my $sth
= $dbh->prepare
(
q{
delete from
table
where
surrogate_key < ?   -- the integer surrogate key column
and
whatever_else
}
);
for( my $i = $max_deletes ; $i <= $table_rows + $max_deletes ; $i += $max_deletes )
{
# AutoCommit => 1 deletes each slice in its own transaction.
$sth->execute( $i );
}
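The surrogate-key range delete above can be sketched in runnable form. Again Python's sqlite3 stands in for DBI/Oracle, and the names are invented: the key point is that each execute() passes an upper bound stepped by the batch size, so each statement touches only one slice of the key space.

```python
import sqlite3

MAX_DELETES = 100   # rows per slice
TABLE_ROWS = 1000   # rows in the table

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [(i,) for i in range(TABLE_ROWS)])
conn.commit()

# Step the upper bound past TABLE_ROWS so the last slice is covered.
for upper in range(MAX_DELETES, TABLE_ROWS + MAX_DELETES + 1, MAX_DELETES):
    conn.execute("DELETE FROM t WHERE id < ?", (upper,))
    conn.commit()  # one transaction per slice, like AutoCommit => 1

left = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(left)  # 0
```

This assumes the surrogate key is reasonably dense; with large gaps the count-limited loop above is the safer bet.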
__END__
--
Steven Lembark 85-09 90th Street
Workhorse Computing Woodhaven, NY 11421
[EMAIL PROTECTED] 1 888 359 3508