> > Yes, and yes.   Simply allocating more checkpoint segments (which can eat
> > a lot of disk space -- requirements are 16MB * (2 * segments + 1) ) will
> > prevent this problem.
> Hmm?  I disagree -- it will only make things worse when the checkpoint
> does occur.
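As a worked example of the disk-space formula quoted above (the 64 for checkpoint_segments is a hypothetical value, not a recommendation):

```python
# WAL disk-space formula from the quoted text:
#   space = 16MB * (2 * checkpoint_segments + 1)
segment_mb = 16            # default WAL segment size in PostgreSQL
checkpoint_segments = 64   # hypothetical setting for a bulk load
space_mb = segment_mb * (2 * checkpoint_segments + 1)
print(space_mb)  # 2064 -- roughly 2GB of pg_xlog
```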

Unless you allocate enough log segments that you don't need to checkpoint until 
the load is over with.   In multiple tests involving bulk loading of large 
quantities of data, increasing the number of checkpoint segments and the 
checkpoint interval has been a net benefit to overall load speed.   It's 
possible that the checkpoints which do occur are worse, but they're not enough 
worse to counterbalance their infrequency.
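For reference, the kind of settings involved look roughly like this (a sketch for PostgreSQL of that era; the values are illustrative, not recommendations):

```
# postgresql.conf -- illustrative bulk-load settings, not recommendations
checkpoint_segments = 64       # enough WAL to avoid checkpointing mid-load
checkpoint_timeout = 1800      # seconds between forced checkpoints
```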

I have not yet been able to do a full scalability series on bgwriter.

Josh Berkus
Aglio Database Solutions
San Francisco
