On Tue, 21 Apr 2009, da...@lang.hm wrote:

>> 1) Disk/controller has a proper write cache. Writes and fsync will be fast. You can insert a few thousand individual transactions per second.

> In case #1, would you expect to get significant gains from batching? Doesn't it suffer from problems similar to #2 when checkpoints hit?

Typically controllers with a write cache are doing elevator sorting across a much larger chunk of working memory (typically >=256MB instead of <32MB on the disk itself), which means a mix of random writes will average better performance, on top of being able to absorb a larger chunk of them before blocking on writes. You get some useful sorting in the OS itself, but every additional layer of useful cache helps significantly here.

Batching is always a win because even a write-cached commit is still pretty expensive, from the server on down the chain.
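As a rough sketch of the difference (table name and values here are just placeholders, not anything from your schema):

    \timing

    -- One transaction per row under autocommit: every INSERT pays the full commit cost
    INSERT INTO t VALUES (1, 'a');
    INSERT INTO t VALUES (2, 'b');
    -- ...and so on, one commit each

    -- Batched: thousands of rows share a single commit
    BEGIN;
    INSERT INTO t VALUES (1, 'a');
    INSERT INTO t VALUES (2, 'b');
    -- ...
    COMMIT;

Even with a caching controller, the second form only waits on one commit for the whole batch.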

> I'll see about setting up a test in the next day or so. Should I be able to script this through psql, or do I need to write a C program to test this?

You can easily compare things with psql, as in the COPY BINARY vs. TEXT example I gave earlier; that's why I was suggesting you run your own tests here, just to get a feel for things on your data set.
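Something along these lines is enough to get numbers straight out of psql; the table and file paths are made up here, so substitute your own table and a data file you've dumped in both formats:

    \timing
    TRUNCATE t;
    COPY t FROM '/tmp/data.txt';
    TRUNCATE t;
    COPY t FROM '/tmp/data.bin' WITH BINARY;

Run each a few times and the timing output gives you a usable comparison without writing any C (use \copy instead if you're not a superuser, since server-side COPY FROM a file requires one).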

--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD

