On 10/07/10 03:54, Kevin Grittner wrote:
Mark Kirkwood <mark.kirkw...@catalyst.net.nz> wrote:

Purely out of interest, since the old repo is still there, I had a
quick look at measuring the overhead, using 8.4's pgbench to run
two custom scripts: one consisting of a single 'SELECT 1', the
other having 100 'SELECT 1' - the latter probably being the worst-case
scenario. Running 1, 2, 4, 8 clients and 1000-10000 transactions
gives an overhead in the 5-8% range [1] (i.e. transactions/s
decrease by this amount with the scheduler turned on [2]). While a
lot better than 30% (!) it is certainly higher than we'd like.
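
(Roughly speaking, the custom scripts were just text files along the lines of

    -- select1.sql: the single-statement case
    SELECT 1;

with the second file repeating that line 100 times, run via something like

    pgbench -n -f select1.sql -c 8 -t 10000 bench

where the file and database names above are just placeholders for illustration,
not the exact invocation.)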

Hmmm...  In my first benchmarks of the serializable patch I was
likewise stressing a RAM-only run to see how much overhead was added
to a very small database transaction, and wound up with about 8%.
By profiling where the time was going with and without the patch,
I narrowed it down to lock contention.  Reworking my LW locking
strategy brought it down to 1.8%.  I'd bet there's room for similar
improvement in the "active transaction" limit you describe. In fact,
if you could bring the code inside blocks of code already covered by
locks, I would think you could get it down to where it would be hard
to find in the noise.


Yeah, excellent suggestion - I suspect there is room for considerable optimization along the lines you suggest. At the time the focus was heavily biased toward a purely DW workload, where the overhead vanished against large plan and execute times, but this could be revisited. Having said that, I suspect a re-architecture is needed for a more wide-ranging solution suitable for Postgres (as opposed to Bizgres or Greenplum).
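
To make your suggestion concrete for anyone following along, the kind of change I have in mind is along these lines - a toy standalone sketch using pthreads, not the actual scheduler code, and all names are invented for illustration:

/*
 * Toy illustration only: fold the scheduler's active-transaction
 * bookkeeping into a critical section that transaction start already
 * holds, instead of paying an extra lock acquire/release per
 * transaction.  All names are invented.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t proc_lock  = PTHREAD_MUTEX_INITIALIZER; /* already held at xact start */
static pthread_mutex_t sched_lock = PTHREAD_MUTEX_INITIALIZER; /* extra lock in the current version */

static int registered_backends = 0;  /* work the existing section already does */
static int active_transactions = 0;  /* scheduler bookkeeping */

/* Current approach: one extra lock round-trip per transaction start. */
static void xact_start_separate_lock(void)
{
    pthread_mutex_lock(&proc_lock);
    registered_backends++;
    pthread_mutex_unlock(&proc_lock);

    pthread_mutex_lock(&sched_lock);   /* the added contention point */
    active_transactions++;
    pthread_mutex_unlock(&sched_lock);
}

/* Suggested approach: piggyback on the section already covered by proc_lock. */
static void xact_start_piggybacked(void)
{
    pthread_mutex_lock(&proc_lock);
    registered_backends++;
    active_transactions++;             /* folded in, no second lock taken */
    pthread_mutex_unlock(&proc_lock);
}

int main(void)
{
    xact_start_separate_lock();
    xact_start_piggybacked();
    printf("backends=%d, active=%d\n", registered_backends, active_transactions);
    return 0;
}

Obviously the real thing would have to live under whatever LWLock the transaction-start path already takes, but it shows the shape of the change.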

Cheers

Mark

