Hello all,
Could someone please tell me why the procedure below (author: Nick Butcher) takes less than a minute on a table with 50,000 rows and about 21 minutes on a table with 235,000 rows? I have created a bigger rollback segment to take care of this, but no improvement. Where should I be looking?
From: [EMAIL PROTECTED]
Sent: Monday, December 03, 2001 3:10 PM
To: Multiple recipients of list ORACLE-L
Subject: deleting duplicate records
Title: RE: help with deleting duplicate records from very large table
Hi Suhen,
The following is a set of notes I have cut from various list messages on deleting duplicates. For the 60M rows you are talking about, the first option (committing frequently) looks the best. Can I suggest you
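The commit-frequently approach mentioned above can be sketched as a PL/SQL loop; this is only an illustrative sketch, not code from the thread, and the table name T and key column DUP_KEY are placeholders you would replace with your own.

```sql
-- Sketch: delete duplicates in batches, committing every 10,000 rows
-- so that rollback/undo usage stays bounded on a very large table.
-- T and DUP_KEY are placeholders, not names from this thread.
BEGIN
  LOOP
    DELETE FROM t
     WHERE rowid NOT IN (SELECT MIN(rowid)
                           FROM t
                          GROUP BY dup_key)
       AND ROWNUM <= 10000;       -- cap the batch size
    EXIT WHEN SQL%ROWCOUNT = 0;   -- stop when nothing left to delete
    COMMIT;
  END LOOP;
  COMMIT;
END;
/
```

The trade-off is that the subquery is re-evaluated on every pass, so this favors bounded undo usage over total elapsed time.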
Suhen,
I have a similar problem at the moment, although it's 900 million rows of which we want to delete approx. 200 million (they obviously never asked you guys for help with their design!). Unfortunately we have just decided to take the hit with downtime to correct the problem. We have
Hi Kevin and Suhen,
I believe that there are only two variants that are appropriate for such huge tables.
1. I remember a good tip from Steve Adams. He recommended using CTAS or INSERT AS SELECT for a similar case. Of course, this method requires a lot of free space.
2. You may think about
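The CTAS approach in option 1 can be sketched roughly as follows. This is a hedged sketch: the table name invaudee comes from elsewhere in the thread, but the duplicate-key column DUP_KEY is a placeholder.

```sql
-- Sketch: rebuild the table without duplicates via CREATE TABLE AS
-- SELECT (CTAS), then swap it in. Needs free space for a full copy,
-- and indexes, grants, and constraints must be recreated afterwards.
CREATE TABLE invaudee_dedup AS
  SELECT *
    FROM invaudee a
   WHERE a.rowid = (SELECT MIN(b.rowid)
                      FROM invaudee b
                     WHERE b.dup_key = a.dup_key);  -- DUP_KEY is a placeholder

-- Then, during the downtime window:
--   DROP TABLE invaudee;
--   RENAME invaudee_dedup TO invaudee;
```

Because CTAS writes a fresh table rather than generating undo for millions of deletes, it tends to scale much better than a single large DELETE, at the cost of the temporary disk space.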
Ed,
Thanks for that... I wasn't even aware of the 'exceptions into exceptions_table' clause. Unfortunately for me it's not duplicates I'm deleting, it's just a shed load of data :(
regards,
K.
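For anyone else following along, the 'exceptions into' clause mentioned above works roughly like this: attempting to enable a unique constraint logs the rowids of all violating rows into an exceptions table, which you can then use to find and delete the duplicates. A sketch follows; the table name T, constraint name T_UK, and column DUP_KEY are assumptions, not names from this thread.

```sql
-- The exceptions table is normally created with the script Oracle
-- ships ($ORACLE_HOME/rdbms/admin/utlexcpt.sql), which defines:
CREATE TABLE exceptions (
  row_id     ROWID,
  owner      VARCHAR2(30),
  table_name VARCHAR2(30),
  constraint VARCHAR2(30)
);

-- Trying to enable a unique constraint records every violating row:
ALTER TABLE t
  ADD CONSTRAINT t_uk UNIQUE (dup_key)   -- T and DUP_KEY are placeholders
  EXCEPTIONS INTO exceptions;

-- EXCEPTIONS now holds the rowids of ALL rows involved in duplicate
-- keys (every copy, not just the extras), so keep one row per key
-- and delete the rest before re-enabling the constraint.
```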
hit any user to continue
__
Kevin Thomas
Technical Analyst
Deregulation Services
List,
I need to delete duplicate records from a very large table (60 million records +). There would be about 3 million duplicate entries. What is the quickest way to do this?
The syntax that I am using is

delete from invaudee
where rowid not in (select min(rowid)
                    from invaudee
                    group by <duplicate key columns>)
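A likely reason for the slowdown asked about at the top of the thread is that this NOT IN form has to compare every row's rowid against the full aggregated result of the subquery, so the work grows much faster than linearly with the table size. A commonly suggested alternative on large tables is the correlated form below, which can make use of an index on the duplicate-key columns; this is a sketch, and COL1/COL2 are placeholders since the thread never names the key columns of invaudee.

```sql
-- Sketch: keep the row with the lowest rowid for each key and delete
-- the rest. COL1 and COL2 are placeholders for the columns that
-- define a "duplicate"; an index on (col1, col2) helps here.
DELETE FROM invaudee a
 WHERE a.rowid > (SELECT MIN(b.rowid)
                    FROM invaudee b
                   WHERE b.col1 = a.col1
                     AND b.col2 = a.col2);
```

On 60M rows this is still a very large single transaction, which is why the batched-commit and CTAS approaches earlier in the thread are usually preferred at that scale.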