On 19 Jul 2005, at 12:51, Richard Huxton wrote:
> Yves Vindevogel wrote:
>>> So, I must use a function that will check against u1 and u2, and then
>>> insert if it is ok.
>>> I know that such a function is way slower than my insert query.
>
> So - you have a table, called something like "upload", with 20,000 rows,
> and you'd like to know whether it is safe to insert them. Well, it's easy
> enough to identify which ones are duplicates.
>
> SELECT * FROM upload JOIN main_table ON u1=f1 AND u2=f2 AND u3=f3;
> SELECT * FROM upload JOIN main_table ON u1=f1 AND u2=f2 AND u4=f4;

Hmm, why did I not think of that myself. That is a good idea. I can delete
the ones that would fail my first unique index this way, then delete the
ones that would fail my second unique index, and then upload them.
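Spelled out, that delete-then-insert plan might look like this. A sketch only: I am assuming the two unique indexes cover main_table(f1,f2,f3) and main_table(f1,f2,f4), mirrored by upload's u1..u4; the index definitions are my assumption, not stated in the thread.

```sql
-- Assumed schema: main_table(f1,f2,f3,f4) with unique indexes on
-- (f1,f2,f3) and (f1,f2,f4); upload(u1,u2,u3,u4) holds the 20,000 new rows.

-- Drop upload rows that would collide with the first unique index
DELETE FROM upload
WHERE EXISTS (SELECT 1 FROM main_table m
              WHERE m.f1 = upload.u1 AND m.f2 = upload.u2
                AND m.f3 = upload.u3);

-- Drop upload rows that would collide with the second unique index
DELETE FROM upload
WHERE EXISTS (SELECT 1 FROM main_table m
              WHERE m.f1 = upload.u1 AND m.f2 = upload.u2
                AND m.f4 = upload.u4);

-- Whatever is left can be inserted safely
INSERT INTO main_table (f1, f2, f3, f4)
SELECT u1, u2, u3, u4 FROM upload;
```

Three set-based statements instead of 20,000 per-row checks, which is where the speed comes from.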
> I've spent a lot of time moving data from one system to another, usually
> having to clean it in the process. At 9pm on a Friday, you decide that on
> the next job you'll find an efficient way to do it :-)
>
> Are you saying that deleting these rows and then inserting takes too long?

This goes very fast, but not with a function that checks each record one by
one.
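For contrast, the slow record-by-record approach being ruled out here would look something like the following hypothetical PL/pgSQL sketch (the function name and the index assumptions are mine, not from the thread):

```sql
-- Illustrative only: one round-trip through the table per uploaded row.
CREATE OR REPLACE FUNCTION insert_checked() RETURNS integer AS $$
DECLARE
    r RECORD;
    n integer := 0;
BEGIN
    FOR r IN SELECT u1, u2, u3, u4 FROM upload LOOP
        -- A row collides if it matches either assumed unique index
        IF NOT EXISTS (SELECT 1 FROM main_table
                       WHERE f1 = r.u1 AND f2 = r.u2
                         AND (f3 = r.u3 OR f4 = r.u4)) THEN
            INSERT INTO main_table (f1, f2, f3, f4)
            VALUES (r.u1, r.u2, r.u3, r.u4);
            n := n + 1;
        END IF;
    END LOOP;
    RETURN n;
END;
$$ LANGUAGE plpgsql;
```

Each iteration pays for its own lookup, which is why this is so much slower than the set-based deletes.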
> You could get away with one query if you converted them to left-joins:
>
> INSERT INTO ...
> SELECT * FROM upload LEFT JOIN ... WHERE f3 IS NULL
> UNION
> SELECT * FROM upload LEFT JOIN ... WHERE f4 IS NULL
>
> The UNION will remove duplicates for you, but this might turn out to be
> slower than two separate queries.
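One way to fill in the left-join sketch with the names used earlier (still assuming the two unique indexes cover (f1,f2,f3) and (f1,f2,f4), and that the join columns are NOT NULL): a row is only safe when it collides with *neither* index, so both anti-join conditions have to hold at once.

```sql
-- Hypothetical concrete form; table and column names as assumed above.
INSERT INTO main_table (f1, f2, f3, f4)
SELECT u1, u2, u3, u4
FROM upload
     LEFT JOIN main_table a ON a.f1 = u1 AND a.f2 = u2 AND a.f3 = u3
     LEFT JOIN main_table b ON b.f1 = u1 AND b.f2 = u2 AND b.f4 = u4
WHERE a.f1 IS NULL   -- no match on the first unique index
  AND b.f1 IS NULL;  -- no match on the second unique index
```

The IS NULL tests keep only the upload rows that found no partner in either left join, i.e. the rows that cannot violate either index.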
Kind regards,
Mail: [EMAIL PROTECTED] - Mobile: +32 (478) 80 82 91
Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76
First they ignore you. Then they laugh at you. Then they fight you. Then you win.