On Tue, 12 Aug 2003, mixo wrote:

> that I am currently importing data into Pg which is about 2.9 Gigs.
> Unfortunately, to maintain data integrity, data is inserted into a table
> one row at a time.

So you don't put a number of inserts into one transaction?

If you don't do that, then PostgreSQL treats each command as its own
transaction, and every insert is forced out to disk before the command
returns (returning while the data is only in some cache is not safe, even
if other products do that). Otherwise the server could promise the client
that a row has been stored, then go down, and the row that was only in the
cache would be lost. That is much faster, but not what you expect from a
real database.

So, group the inserts into transactions of maybe 1000 commands each. It
will go much faster: the server can cache the rows and, at the end, just
make sure all 1000 have been written out to disk in one go.
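As a rough sketch of the batching pattern, here is a small Python example
using the standard-library sqlite3 module (chosen only so it runs without a
server; the same BEGIN/COMMIT grouping applies to a PostgreSQL client, and
the table name, batch size, and row data are made up for illustration):

```python
import sqlite3

# Open in autocommit mode (isolation_level=None) so we control
# transaction boundaries ourselves with explicit BEGIN/COMMIT.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")

rows = [(i, "row %d" % i) for i in range(5000)]
batch_size = 1000  # one transaction per 1000 inserts

cur = conn.cursor()
for start in range(0, len(rows), batch_size):
    batch = rows[start:start + batch_size]
    cur.execute("BEGIN")  # start one transaction for the whole batch
    cur.executemany("INSERT INTO t VALUES (?, ?)", batch)
    cur.execute("COMMIT")  # one sync point per batch, not per row

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # → 5000
```

The point is that the commit (and hence the wait for the disk) happens once
per batch instead of once per row.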

There is also a configuration variable that tells PostgreSQL not to wait
until the insert is out on disk, but that is not recommended if you value
your data.
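The variable referred to is most likely the fsync setting in
postgresql.conf (assuming a 7.x-era server); turning it off skips the sync
to disk at commit:

```
# postgresql.conf -- NOT recommended if you value your data:
# a crash can lose, or even corrupt, recently committed rows.
fsync = false
```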

And lastly, why does it help integrity to insert data one row at a time?

