Hello!

One important use case in my libpq-based application (PostgreSQL 8.1.4) is bulk
data loading.

Currently it is implemented as a series of plain INSERTs (using the binary form
of PQexecParams), and the problem is that it is pretty slow.
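For the sake of concreteness, the current per-row code boils down to roughly
this (the table and column names here are just placeholders):

/* One binary-format PQexecParams() call per record. */
#include <arpa/inet.h>   /* htonl */
#include <stdint.h>
#include <stdio.h>
#include <libpq-fe.h>

static int insert_one(PGconn *conn, int32_t id)
{
    uint32_t netval = htonl((uint32_t) id);   /* int4 must be sent big-endian */
    const char *values[1]  = { (const char *) &netval };
    int         lengths[1] = { sizeof(netval) };
    int         formats[1] = { 1 };           /* 1 = binary parameter */

    PGresult *res = PQexecParams(conn,
                                 "INSERT INTO mytable (id) VALUES ($1)",
                                 1, NULL, values, lengths, formats, 0);
    int ok = (PQresultStatus(res) == PGRES_COMMAND_OK);
    if (!ok)
        fprintf(stderr, "%s", PQerrorMessage(conn));
    PQclear(res);
    return ok;
}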

I've experimented with batching and with constructions like
INSERT ... (SELECT .. UNION ALL SELECT ..) to improve performance, but I am
not satisfied with the results I've got.

Now I am trying to figure out whether it is possible to use COPY FROM STDIN
instead of INSERT when I have to insert, say, more than 100 records at once.
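If it is possible, I imagine the libpq side would look roughly like the sketch
below (a made-up table and the text COPY format, driven through
PQputCopyData/PQputCopyEnd):

/* Sketch: load 100 rows through one COPY, assuming a hypothetical
 * table mytable(id int, name text) and text-format rows. */
#include <stdio.h>
#include <libpq-fe.h>

static int copy_rows(PGconn *conn)
{
    PGresult *res = PQexec(conn, "COPY mytable (id, name) FROM STDIN");
    if (PQresultStatus(res) != PGRES_COPY_IN) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        PQclear(res);
        return 0;
    }
    PQclear(res);

    for (int i = 0; i < 100; i++) {           /* stream 100 rows in one COPY */
        char line[64];
        int n = snprintf(line, sizeof(line), "%d\trow%d\n", i, i);
        if (PQputCopyData(conn, line, n) != 1) /* tab-separated text format */
            return 0;
    }
    if (PQputCopyEnd(conn, NULL) != 1)        /* NULL = finish COPY normally */
        return 0;

    res = PQgetResult(conn);                  /* final status of the COPY */
    int ok = (PQresultStatus(res) == PGRES_COMMAND_OK);
    if (!ok)
        fprintf(stderr, "%s", PQerrorMessage(conn));
    PQclear(res);
    return ok;
}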

Hints are highly appreciated.

The only limitation mentioned in the manual concerns rules, which does not
affect me since I don't use rules.
Am I going to run into any other problems (concurrency, reliability,
compatibility, whatever) along the way?

Many thanks.

-- 
Best regards
Ilja Golshtein

