On Thu, Oct 08, 2015 at 11:08:55AM -0400, Carlo wrote:
> >> Sounds like a locking problem
> 
> This is what I am trying to get at. The reason that I am not addressing
> hardware or OS configuration concerns is that this is not my environment,
> but my client's. The client is running my import software and has a choice
> of how long the transactions can be. They are going for long transactions,
> and I am trying to determine whether there is a penalty for single long
> transactions over a configuration which would allow for more successive
> short transactions. (keep in mind all reads and writes are single-row). 
> 
> There are other people working on hardware and OS configuration, and that's
> why I don't want to get into a general optimization discussion: the
> client is concerned with just this question.
> 

Hi Carlo,

Since the reads/writes are basically independent, which is what I take your
"single-row" comment to mean, batching them balances two opposing factors.
First, larger batches let you consolidate I/O and other resource requests,
making them more efficient per row. Second, larger batches require more
locking, held for longer, as the number of rows updated grows. It may well
be that halving your batch size lets the system process two batches more
quickly than a single batch of twice the size.
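
As a concrete illustration, here is a minimal sketch of what that knob might
look like in import code. It assumes psycopg2, a hypothetical "items" table,
and a rows iterable; none of those come from your actual setup:

    import psycopg2

    def import_rows(dsn, rows, batch_size):
        """Insert rows one at a time, committing every batch_size rows.

        A smaller batch_size means more commits (more per-transaction
        overhead); a larger batch_size means longer-lived transactions
        that hold their row locks for longer.
        """
        conn = psycopg2.connect(dsn)
        try:
            cur = conn.cursor()
            pending = 0
            for row in rows:
                cur.execute(
                    "INSERT INTO items (id, payload) VALUES (%s, %s)",
                    (row["id"], row["payload"]),
                )
                pending += 1
                if pending >= batch_size:
                    conn.commit()  # release locks; start a new transaction
                    pending = 0
            conn.commit()  # flush the final partial batch
        finally:
            conn.close()

With something like this in place, your client can simply halve or double
batch_size and measure throughput to find the crossover point.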

Regards,
Ken

