Steve,

Per your and Tom's recommendations, I significantly increased the number of transactions used for testing. See my last post.

The database will have pretty heavy mixed use, i.e., both reads and writes.

I performed 32 iterations per scenario this go-round.

I'll look into OSDB for further benchmarking. Thanks for the tip.

Since pgbench is part of the postgres distribution, I had it at hand, and it seems to be fairly widely referenced, I figured I'd go ahead and post preliminary results from it.
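
For anyone who hasn't used it, the basic invocation looks roughly like this (the database name, scale factor, and client/transaction counts below are just placeholders, not the parameters from my runs):

    pgbench -i -s 10 bench       # initialize a test database at scale factor 10
    pgbench -c 10 -t 300 bench   # 10 concurrent clients, 300 transactions each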

-tfo

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Apr 15, 2005, at 4:24 PM, Steve Poe wrote:

Tom,

People's opinions on pgbench may vary, so take what I say with a grain of salt. Here are my thoughts:

1) Test with no fewer than 200 transactions per client. I've heard that with fewer than that, your results will vary too much with the direction the wind is blowing. A high enough value will help rule out some of the "noise" factor. If I'm wrong, please let me know.


2) How is the database going to be used? What percentage will be reads versus writes, if you had to guess? Pgbench is like a TPC-B test, which will help gauge the potential tps throughput of your server. However, it may not stress the server enough to help you make key performance changes. Then again, benchmarks are like statistics...full of lies <g>.


3) Run not just a couple of pgbench runs, but *many* (I do between 20 and 40) so you can rule out noise and gauge improvement from the median results; see the sketch after this list.

4) Find something with which you can test OLTP-type transactions. I used OSDB since it is simple to implement and use, although OSDL's OLTP testing will be closer to reality.
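
To make the repeated runs in #3 less tedious, here's a rough sketch of the kind of loop I mean (untested; the database name, client count, and transaction count are placeholders, and the grep pattern depends on your pgbench version's output format):

    # run pgbench 30 times, keeping the steady-state tps from each run
    for i in `seq 1 30`; do
        pgbench -c 10 -t 300 bench | grep excluding | awk '{print $3}'
    done | sort -n > tps.txt

    # rough median: the middle line of the sorted results
    awk '{v[NR]=$1} END {print v[int((NR+1)/2)]}' tps.txt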

Steve Poe

