On 06/27/2013 11:13 PM, Jeff Janes wrote:
> Wouldn't any IO system being used on a high-end system be fairly good
> about making this work through interleaved read-ahead algorithms?
To some extent, certainly. It cannot possibly get better than a fully
sequential load, though.

> That sounds like it would be much more susceptible to lock contention,
> and harder to get bug-free, than dividing into bigger chunks, like whole
> 1 gig segments.

Maybe, yes. Splitting a known amount of work into equal pieces is a pretty
easy parallelization strategy. If you don't know the total amount of work,
or the size of each piece, in advance, it gets a bit harder. Choosing
chunks that turn out to be too big certainly hurts; see the sketch below.

Regards

Markus Wanner
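What follows is a minimal sketch (not PostgreSQL code) of the dynamic
variant: a few workers claim fixed-size chunks of a single input file from
a shared counter, instead of each being handed one big static slice up
front. The chunk size, worker count and command-line interface are made up
for illustration; a real loader would parse and insert each chunk rather
than just counting bytes.

#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

#define CHUNK_SIZE	(8L * 1024 * 1024)	/* too big near EOF leaves workers idle */
#define N_WORKERS	4

static int		fd;				/* shared input file descriptor */
static off_t	file_size;
static off_t	next_offset = 0;	/* next unclaimed byte */
static pthread_mutex_t claim_lock = PTHREAD_MUTEX_INITIALIZER;

static void *
worker(void *arg)
{
	char	   *buf = malloc(CHUNK_SIZE);

	for (;;)
	{
		off_t		start;
		ssize_t		len;

		/* claim the next chunk; the only step needing synchronization */
		pthread_mutex_lock(&claim_lock);
		start = next_offset;
		next_offset += CHUNK_SIZE;
		pthread_mutex_unlock(&claim_lock);

		if (start >= file_size)
			break;

		/* pread() lets every worker read concurrently from the same fd */
		len = pread(fd, buf, CHUNK_SIZE, start);
		if (len <= 0)
			break;

		/* ... parse and load buf[0 .. len) here ... */
		printf("worker %ld: %zd bytes at offset %lld\n",
			   (long) (intptr_t) arg, len, (long long) start);
	}

	free(buf);
	return NULL;
}

int
main(int argc, char **argv)
{
	pthread_t	threads[N_WORKERS];
	struct stat st;

	if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0)
	{
		fprintf(stderr, "usage: %s input-file\n", argv[0]);
		return 1;
	}
	fstat(fd, &st);
	file_size = st.st_size;

	for (int i = 0; i < N_WORKERS; i++)
		pthread_create(&threads[i], NULL, worker, (void *) (intptr_t) i);
	for (int i = 0; i < N_WORKERS; i++)
		pthread_join(threads[i], NULL);

	close(fd);
	return 0;
}

The trade-off is exactly the one discussed above: with 1 GB chunks the
claim lock is taken rarely, but the last few chunks can leave most workers
idle; with very small chunks, the locking (and the scattered reads) start
to dominate.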