> CREATE TABLE dup_killer (member_id INTEGER, date DATE);
> INSERT INTO dup_killer (member_id, date)
>     SELECT * FROM talks GROUP BY member_id, date HAVING count(*)>1;
>
> But, now that I have the copies in the dup_killer table, I have not
> been able to discover an efficient way to go back to the original
> table (talks) and delete them.  My plan was to delete all the records
> from talks that match the criteria of the records now in dup_killer,
> and then INSERT the records from dup_killer back into talks before
> DROPPING dup_killer.  At this point, I am stuck.  Is there an
> efficient method to do this with SQLite, or should I just use a shell
> script?
Add another column to your dup_killer table, and update that column
from the talks table (after initially populating it).  Then use that
column to delete from talks:

    CREATE TABLE dup_killer (member_id INTEGER, date DATE, dup_id INTEGER);

    INSERT INTO dup_killer (member_id, date)
        SELECT member_id, date FROM talks
        GROUP BY member_id, date HAVING count(*) > 1;

    UPDATE dup_killer SET dup_id =
        (SELECT min(t.talk_id) FROM talks t
         WHERE t.member_id = dup_killer.member_id
           AND t.date = dup_killer.date);

    DELETE FROM talks WHERE talk_id IN (SELECT dup_id FROM dup_killer);

(SQLite doesn't support the UPDATE ... FROM or DELETE ... FROM join
syntax, hence the correlated subqueries.  Also note this removes one
copy per duplicate group; if a group can have more than two copies,
repeat the process until the INSERT finds no rows.)

Oh, and I don't remember if it's true for SQLite or not, but for some
DBMS, "count(*)" is slower than "count(column_name)".  If your table
is wide, it might make a difference.

/bs
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
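For what it's worth, here is a self-contained sketch of that helper-table approach, turned around so a single pass also handles groups with more than two copies: it records the talk_id to *keep* in each duplicate group and deletes everything else.  It is written against Python's sqlite3 module, and the talks schema (talk_id, member_id, date) and the sample rows are assumptions for illustration, not the poster's real data:

```python
import sqlite3

# Toy schema assumed from the thread: talks(talk_id, member_id, date).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE talks (talk_id INTEGER PRIMARY KEY, member_id INTEGER, date DATE)"
)
# Hypothetical sample data: member 1 has a duplicated talk on 2005-01-01.
cur.executemany(
    "INSERT INTO talks (member_id, date) VALUES (?, ?)",
    [(1, "2005-01-01"), (1, "2005-01-01"), (2, "2005-01-02")],
)

cur.executescript("""
    CREATE TABLE dup_killer (member_id INTEGER, date DATE, keep_id INTEGER);

    -- One row per duplicated (member_id, date) group.
    INSERT INTO dup_killer (member_id, date)
        SELECT member_id, date FROM talks
        GROUP BY member_id, date HAVING count(*) > 1;

    -- Record the one talk_id we want to keep in each group.
    UPDATE dup_killer SET keep_id =
        (SELECT min(t.talk_id) FROM talks t
         WHERE t.member_id = dup_killer.member_id
           AND t.date = dup_killer.date);

    -- Delete every row of a duplicated group except the kept one.
    DELETE FROM talks
    WHERE EXISTS (SELECT 1 FROM dup_killer d
                  WHERE d.member_id = talks.member_id
                    AND d.date = talks.date
                    AND d.keep_id <> talks.talk_id);

    DROP TABLE dup_killer;
""")
conn.commit()
```

Recording the keeper instead of the victim is what lets one DELETE clear a group of any size, so there is no need to re-run the INSERT/DELETE cycle.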