Hi, I'm trying to do a DELETE FROM on my large table (about 800 million rows) based on the contents of another, moderately large table (about 110 million rows). The command I'm using is:

    DELETE FROM records WHERE id NOT IN (SELECT id FROM unique_records);

This process ran for about two weeks before I decided to stop it -- it was dragging down the DB server. I can understand long-running processes, but two weeks seems a bit much even for a big table.

Is this the best way to approach the problem? Is there a better way?
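One rewrite I've been wondering about is turning the NOT IN into a correlated NOT EXISTS, after indexing and analyzing the temp table first. This is just a sketch (the index name is mine, and I haven't verified how 8.3 actually plans it), but it should at least turn each row's check into an index probe, and it sidesteps the NOT IN gotcha where a single NULL in the subquery means nothing gets deleted:

    CREATE INDEX unique_records_id_idx ON unique_records (id);
    ANALYZE unique_records;

    -- delete every row whose id has no match in unique_records
    DELETE FROM records r
    WHERE NOT EXISTS (
        SELECT 1 FROM unique_records u WHERE u.id = r.id
    );

Would that be expected to behave any better than the NOT IN form?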
Some background: the server is version 8.3, running nothing but Pg. The 'records' table has 'id' as its primary key, and one other index on another column. The table is referenced by just about every other table in my DB (about 15 other tables) via foreign key constraints, which I don't want to break (which is why I'm deleting rows rather than just recreating the table). Most of the dependent tables have ON DELETE CASCADE.

The 'unique_records' table is a temp table I got via something like:

    SELECT DISTINCT ON (other_column) id INTO unique_records FROM records;

Thanks very much!

Josh Leder
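P.S. Another idea I've been toying with is chipping away at the table in batches from a shell script, so each transaction stays small and the cascades into the dependent tables happen in manageable chunks. A sketch (the 100000 batch size is a guess):

    -- rerun this until it reports DELETE 0
    DELETE FROM records
    WHERE id IN (
        SELECT r.id
        FROM records r
        LEFT JOIN unique_records u ON u.id = r.id
        WHERE u.id IS NULL   -- anti-join: keep only rows with no match
        LIMIT 100000
    );

Does batching like that actually help, or does rescanning for each batch make it a wash?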