Re: [PERFORM] Massive Inserts Strategies

2006-03-28 Thread Jim C. Nasby
Is there some other unique key you can test on? Take a look at http://lnk.nu/cvs.distributed.net/8qt.sql lines 169-216 for an example. In this case we use a different method for assigning IDs than you probably will, but the idea remains. On Tue, Mar 28, 2006 at 10:59:49AM -0500, ashah wrote: > I
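
A minimal sketch of the idea (not the code from the referenced 8qt.sql file; the natural key (col1, col2) and the table names are assumptions based on the thread):

    -- Insert only rows whose natural key is not already present,
    -- letting the database fill col0 from its sequence/default.
    INSERT INTO original_table (col1, col2, col3)
    SELECT t.col1, t.col2, t.col3
    FROM temp_table t
    WHERE NOT EXISTS (
        SELECT 1 FROM original_table o
        WHERE o.col1 = t.col1
          AND o.col2 = t.col2
    );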

Re: [PERFORM] Massive Inserts Strategies

2006-03-28 Thread Markus Schaber
Hi, ashah, ashah wrote: > I tried this solution, but ran into following problem. > > The temp_table has columns (col1, col2, col3). > > The original_table has columns (col0, col1, col2, col3) > Now the extra col0 on the original_table is the unique generated ID by > the database. INSERT INTO o
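
The reply is truncated after "INSERT INTO o"; a sketch of what such an INSERT could look like, assuming col0 is a serial column with a default drawn from a sequence:

    -- Omit col0 from the column list so the default/sequence assigns it.
    INSERT INTO original_table (col1, col2, col3)
    SELECT col1, col2, col3
    FROM temp_table;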

Re: [PERFORM] Massive Inserts Strategies

2006-03-28 Thread ashah
I tried this solution, but ran into the following problem. The temp_table has columns (col1, col2, col3). The original_table has columns (col0, col1, col2, col3). Now the extra col0 on the original_table is the unique ID generated by the database. How can I make your suggestions work in that case?

Re: [PERFORM] Massive Inserts Strategies

2006-03-22 Thread Jim C. Nasby
Load the files into a temp table and go from there... COPY ... FROM file; UPDATE existing_table SET ... WHERE ...; INSERT INTO existing_table SELECT * FROM temp_table WHERE NOT EXISTS( SELECT * FROM existing_table WHERE ...) On Wed, Mar 22, 2006 at 10:32:10AM -0500, ashah wrote: > I have a databa
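
A fleshed-out version of that skeleton (the column names, key, and file path are illustrative assumptions, not from the original mail):

    CREATE TEMP TABLE temp_table (col1 int, col2 text, col3 text);

    COPY temp_table FROM '/path/to/loadfile.csv' WITH CSV;

    -- Update rows that already exist in the target table.
    UPDATE existing_table e
    SET col3 = t.col3
    FROM temp_table t
    WHERE e.col1 = t.col1 AND e.col2 = t.col2;

    -- Insert the rows that are not there yet.
    INSERT INTO existing_table (col1, col2, col3)
    SELECT t.col1, t.col2, t.col3
    FROM temp_table t
    WHERE NOT EXISTS (
        SELECT 1 FROM existing_table e
        WHERE e.col1 = t.col1 AND e.col2 = t.col2
    );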

Re: [PERFORM] Massive Inserts Strategies

2006-03-22 Thread PFC
For both cases, you could COPY your file into a temporary table and do two big JOINs with your existing table: one for inserting new rows, and one for updating existing rows. Doing a large bulk query is a lot more efficient than doing a lot of selects. Vacuum afterwards, and you'll be fine.
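
A sketch of the join-based variant (the key column col1 and the other column names are assumptions):

    -- New rows: anti-join against the existing table.
    INSERT INTO existing_table (col1, col2, col3)
    SELECT t.col1, t.col2, t.col3
    FROM temp_table t
    LEFT JOIN existing_table e ON e.col1 = t.col1
    WHERE e.col1 IS NULL;

    -- Existing rows: update from the temp table in one statement.
    UPDATE existing_table e
    SET col2 = t.col2, col3 = t.col3
    FROM temp_table t
    WHERE e.col1 = t.col1;

    -- Reclaim the dead tuples left behind by the bulk UPDATE.
    VACUUM ANALYZE existing_table;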

[PERFORM] Massive Inserts Strategies

2006-03-22 Thread ashah
I have a database with foreign keys enabled on the schema. I receive different files, some of them are huge, and I need to load these files into the database every night. There are several scenarios that I want to design an optimal solution for - 1. One of the files has around 80K records and I ha