Okay. I updated my benchmark page with new numbers, which are the result of extensive pgbench usage over this past week. In fact, I modified pgbench (for both of the latest versions of Postgres) to accept a number of iterations as an argument and report the results of each iteration as well as a summary of mean tps at the end. The modifications to the source are included on the new page, and I'd be happy to submit them as patches if this seems like useful functionality to the developers and the community. I find it nicer to have pgbench itself be the authoritative source of iterative results rather than a wrapper script, but it'd be nice to have an extra set of eyes verify that I've included everything in the loop that ought to be there.
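For anyone who wants per-iteration numbers without patching pgbench, the wrapper-script approach I mentioned can be sketched roughly as below. This is only an illustration, not the patch itself: the iteration count, the flags you pass, and the `tps = ...` output line it parses are assumptions you may need to adjust for your pgbench build.

```shell
#!/bin/sh
# Sketch of a wrapper: run pgbench N times, print each run's tps, then the mean.
# Assumes pgbench emits a line like "tps = 123.45 (including connections establishing)".

mean_tps() {
  # Average all whitespace-separated numbers read from stdin.
  awk '{ for (i = 1; i <= NF; i++) { sum += $i; n++ } }
       END { if (n > 0) printf "%.2f\n", sum / n }'
}

run_all() {
  # Usage: run_all <iterations> <pgbench command and flags...>
  iterations=$1; shift
  tps_list=""
  i=1
  while [ "$i" -le "$iterations" ]; do
    # Pull the first tps figure out of this run's output.
    tps=$("$@" 2>/dev/null | sed -n 's/^tps = \([0-9.]*\).*/\1/p' | head -n 1)
    echo "iteration $i: tps = $tps"
    tps_list="$tps_list $tps"
    i=$((i + 1))
  done
  echo "mean tps = $(echo "$tps_list" | mean_tps)"
}
```

Invoked as, say, `run_all 10 pgbench -c 25 -t 1000 bench`, it prints one tps line per iteration and the mean at the end. The obvious weakness, and part of why I'd rather see this inside pgbench proper, is that it depends on scraping pgbench's output format.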

A couple of notes:

* There was some interesting oscillation behavior in both versions of Postgres that occurred with 25 clients and 1000 transactions at a scaling factor of 100. This was repeatable with the distribution version of pgbench run iteratively from the command line. I'm not sure how to explain it.

* I'm not really sure why the single-client run at 1000 transactions was so much slower than all successive iterations, including the single-client run at 10000 transactions at a scaling factor of 100. I'm also not sure whether I should be concerned that throughput was so much higher for 10000 transactions.

Anyway, the code changes, the configuration details, and the results are all posted here:

http://www.sitening.com/pgbench.html

Once again, I'd be curious to get feedback from developers and the community about the results, and I'm happy to answer any questions.

-tfo

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Apr 15, 2005, at 4:23 PM, Tom Lane wrote:

> "Thomas F.O'Connell" <[EMAIL PROTECTED]> writes:
>> http://www.sitening.com/pgbench.html
>
> You need to run *many* more transactions than that to get pgbench
> numbers that aren't mostly noise. In my experience 1000 transactions
> per client is a rock-bottom minimum to get repeatable numbers; 10000 per
> is better.
>
> Also, in any run where #clients >= scaling factor, what you're measuring
> is primarily contention to update the "branches" rows. Which is not
> necessarily a bad thing to check, but it's generally not the most
> interesting performance domain (if your app is like that you need to
> redesign the app...)
>
>> To me, it looks like basic transactional performance is modestly
>> improved at 8.0 across a variety of metrics.
>
> That's what I would expect --- we usually do some performance work in
> every release cycle, but there was not a huge amount of it for 8.0.
>
> However, these numbers don't prove much either way.
>
> regards, tom lane


