I just installed Postgres 7.1.3 on my Red Hat 7.2 Linux box. We are doing research to see how Postgres performs. I used the COPY utility to import data from a text file containing 32 million rows; 26 hours have passed and it is still running. My questions: How does Postgres handle this kind of bulk load? Does it commit every row, or is the commit interval adjustable, and if so, how? Does Postgres provide a direct-to-disk load path like Oracle's? Are there other ways to speed this up? If the loading performance can't be improved significantly, we will have to go back to Oracle. Can anybody help? Thanks!
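In case it helps, the load was a plain COPY from psql. If restructuring it along the following lines would make a difference, please let me know (the table, index, and file names below are placeholders, not our actual schema):

```sql
-- Drop secondary indexes before loading; maintaining them row-by-row
-- is often what dominates bulk-load time.
DROP INDEX my_table_idx;

-- The bulk load itself (7.1-era COPY syntax).
-- If COPY runs as a single transaction, there would be no
-- per-row commit to adjust in the first place.
COPY my_table FROM '/tmp/data.txt' USING DELIMITERS '\t';

-- Rebuild the index and refresh planner statistics afterwards.
CREATE INDEX my_table_idx ON my_table (id);
VACUUM ANALYZE my_table;
```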
Anna Zhang